When is Using Generative AI just Plain Wrong?
Sometimes, the effort, not the perfection of the result, is the crucial ingredient.
Last week, Google pulled an advertisement for its Gemini product (a large language model, or LLM, like ChatGPT) that showed a father helping his daughter use the system to write a note to an Olympic athlete.
Drafting such a note is a task that almost any LLM can do well, as is writing a thank-you, sympathy, or birthday message. I do not doubt that this is one of the main ways people use LLMs, but in this case, there was a backlash against showing a child how to use a computer for this task.
"If this approach to communication becomes widespread (and Google is saying it will work hard to make it so), it will lead to a future dominated by homogenized modes of expression – a monocultural future where we see fewer and fewer examples of original human thoughts." writes Shelly Palmer, professor of advanced media at Syracuse University's S.I. Newhouse School of Public Communications. "As more and more people rely on AI to generate their content, it is easy to imagine a future where the richness of human language and culture erode." His pointed and non-AI written critique is here.
But there is a larger question here. When is it plain wrong to use an AI to write something?
Would you use ChatGPT to write a church service? I thought everyone would agree that it is a horrible use of AI, but someone did write a church service using ChatGPT, and a surprising number of people liked it.
How about writing a eulogy for a friend? Is that an acceptable use of AI? Even Harry Potter, who had access to much more reliable magic than an LLM, didn't use magic to bury Dobby. Sometimes, the effort, not the perfection of the result, is the crucial ingredient.
A dean at Vanderbilt found this out the hard way when he used ChatGPT to write an email to students expressing his sympathy for the victims of a mass shooting at another campus and encouraging those who needed it to seek help. Regardless of the quality of the letter, using a tool like this comes off as insincere, condescending, and disrespectful. As one Vanderbilt student said, it was "disgusting." By the way, he is no longer a dean.
One of my first discussions with my students this fall will be how we use AI in the class. Students are used to professors restricting student use of AI, but they are surprised when I ask about the policies we should add to the syllabus to govern MY use of AI in the class. I start by pointing out that I have taught nearly ten thousand undergraduates, and I could use the grades from the past to quickly train an AI on what A, B, C, D, and F essays look like. Then, I can feed their essays into the system and get an instant grade that would not depend on my mood, tiredness, or other factors that might affect their grade. Wonderful. Plus, I could assign 40-page papers weekly without worrying about the grading load!
The students are very quick to identify problems with such a system (primarily motivated by the terror of realizing they might have enrolled in a class with a maniac as the instructor of record).
Most of them focus on privacy issues (Where does that information go? Who has access?), pedagogy (What is the actual value of an essay assignment?), or arguing that if I use AI for grading, then they can use AI to write the essays (it always amazes me that some students are willing to enter into an AI-writing arms race).
After a little bit, I usually get a comment like, "You are paid to teach, and grading is part of that work, so you need to do it." I agree with this response, but I first ask them to think through what it would be like to receive an AI grade.
What if you were in my class, I gave you an "F" for a paper, and when you came to my office to ask why, I said it was because the AI said so, or I simply handed you the AI's feedback? That would not be very satisfying or useful.
But if you were in my class, I gave you an "F" for a paper, and when you came to my office to ask why, I launched into a diatribe about your use of semicolons, the unconfident writing, the repeated ideas, etc., you might decide that I was an asshole, rip me a new one on Rate My Professors, and tell all your friends to avoid my class. But at least I would be taking responsibility for the grade.
I would not use an AI to grade my students' essays. Others might disagree, but I think authenticity and ownership over the process are more important than pure efficiency or accuracy, even if it takes me weeks to get assignments graded (students do rightly rip me for this on the evals when it happens).
Our class policy usually settles on no AI for grading; I can use AI to make lessons, but I must disclose how I used it.
What do you think? Let me know in the comments: When is it wrong to use an AI? When does the method matter? There seems to be a lot of disagreement on this issue. Please feel free to disagree with me.
My commentary may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. I ask that you edit only for style or to shorten, provide proper attribution and link to my contact information.
📥 Recent Talks, News and Videos
I gave a talk for the League of Women Voters and the Daniel Boone Library: Before You Vote: Artificial Intelligence, the Elections and Civic Dialogue.
I was quoted in this KSHB 41 story: Fast food value meal wars heat up.
The University of Missouri is suggesting a number of experts on AI for journalists and podcasters to chat with, including me!
📆 Upcoming Talks/Classes 👨🏫
I will be talking at the Workshop on Emerging Technologies for Digitalization at the Asia-Pacific Economic Cooperation meeting in Lima, Peru, on August 12. More information will be available on the APEC Peru website.
I will be presenting “Managing the Learning Machine” at 8:00 AM on September 10th for the MU Retirees Association (in person and Zoom). More information and registration will be available on the MU Retirees Association website.
My friend and colleague, Sophia Rivera Hassemer, is teaching “Technology Potpourri” for Osher on Sept 12, 19, 26, and Oct 3 from 9:30 to 11:00 am, and I will be her assistant! It will be in person only at the Moss Building and will be very hands-on with technology. More information and registration will be available on the Osher website.
I will give a talk on Artificial Intelligence and the Elections on Tuesday, September 10, from 6:30 to 8:00 pm at the Missouri River Regional Library in Jefferson City. More information is available on the Missouri River Regional Library website.
I will present “Harnessing AI for Nonprofit Growth” from 10:45 to 11:45 a.m. on November 7 via Zoom. More information and registration will be available on the New Chapter Coaching website.

I will present “AI: Current Trends and Future Directions” for the Mid-Missouri PMI Chapter on November 12th at 7:30 a.m. via Zoom. Registration will be available on the PMI Mid-MO Chapter's website.
To write clearly, one must think clearly. I don't know how many times I have felt I had a good idea, only to discover it was not so good after all when I went to write it down. We will become less rigorous thinkers if we use ChatGPT for anything other than mundane uses (e.g., writing a satirical limerick).
Just came across this - I agree. Here is my syllabus statement.
Instructor Use of Generative AI
I will not use AI tools to grade or provide feedback to students (other than auto-graded quizzes). If I were taking a class where the teacher was going to copy/paste (or upload) my paper into an "auto-grader with feedback," I wouldn't bother trying to do a good job. I will always read every assignment, and all feedback will be mine, not some "grading bot." When I use generative AI in the class, I will use it to improve the class (e.g., the “theme song,” visual illustrations, brainstorming ideas, feedback on my writing), not to make it worse. I will try to provide citations and transparency as much as possible, although sometimes the use of generative AI is implied (e.g., do you really think I can write, record, and produce a theme song by myself?).