
Colleges are going to have to put ChatGPT on the curriculum

IN RETROSPECT, my late summer to-do list was laughable. Among 20 other items to accomplish “before the semester begins” was this innocuous bullet point: “Write my AI policy.”

That’s like writing “prepare for storm” while in the eye of the hurricane.

Forecasters in the media had warned me since the spring, so why wasn’t I better prepared? In part because I’m old-fashioned, a late adopter. I’m a scholar of ancient, timeless things, a professor of theology who also teaches Greek and Latin and Coptic. I’m more comfortable decoding papyri unearthed from the desert than re-coding chatbots in the cloud. And I suppose I had stayed put during previous waves of educational technology, which were usually overhyped. Indeed, I had experimented with the generative AI platform ChatGPT when it was first released — and was not impressed.

ChatGPT can’t adjudicate the good from the bad, I had thought. It writes stilted prose, with occasional hallucinations and low aptitude for direct quotation. It’s a powerful aggregator of internet discourse, to be sure. But I thought there was a five-year window to figure out how to adapt our educational methods and goals to generative AI.

Nonetheless, I had blocked off a recent morning to read up on the technology, plug in some of my favorite essay prompts for my classes, and then write my AI policy. But a lot had changed in a year. GPT-4 was now generating decent work on complicated questions, in mere seconds per essay. With just a few minutes of refining prompts, editing and plugging in quotations, these would be above-average student essays. I had not yet seen an excellent essay worthy of an A grade, but the competence to produce a good (albeit formulaic) one was now evident. Some prompts:

• “Give me some options for a bold thesis statement about the future of abortion policy.”

• “Were Jesus’ teachings in the Sermon on the Mount really good advice for daily life?”

• “Analyze the strengths and weaknesses of Professor Michael Peppard’s scholarly writings.”

It gives surprisingly coherent and meaningful responses to all of these, and its criticisms of my own published work are, sadly, accurate.

Some of my assignments require creative or first-person writing. So I prompted GPT-4 to write a personal essay about a young girl who had just made a perilous migration from a violent family in Honduras to a bus stop in Texas, and whose only remaining possession was the rosary her grandmother had given her. Not only did the AI write a coherent story on the first attempt, but it also used metaphors accurately and made symbolic connections that read realistically:

My family was fractured — broken shards that could never form a complete picture again. … Before I left, Abuela handed me a rosary. “Your North Star,” she whispered, as she pressed it into my palm.

Is this an excerpt from a work of great literature? No, but GPT-4 produced a competent narrative with some poignant moments. It generated the pivotal metaphor of rosary as “North Star” — a doubly meaningful symbol for the Catholic migrant’s journey northward. 

Maybe I shouldn’t have been surprised. So much of literature’s meaning and emotion emerges from the manipulation of symbols, and large language models like ChatGPT have been built specifically to do just that. Not only did I now understand the power of this technology to disrupt education, but I also saw the Hollywood writers’ strike with new eyes.

As someone whose career has been built on analytical reading and generative writing, I needed someone to talk me off the professional ledge, to tell me the storm isn’t as scary as it seems. I called up my friend Mounir Ibrahim, who works at Truepic, Inc., a leader in digital-content authenticity. After a long conversation, he convinced me that what I am seeing now is already old technology, and that current capacities far exceed what I can access on a publicly available interface. He persuaded me to change my educational methods and assessments immediately and, in this new world of AI, to reassess what education is for.

This fall semester needs to be a period of rigorous questioning and experimentation for teachers at all levels. If AI can generate a cogent essay template about the role of religion in the Roman Empire, then should I retain this essay prompt in my class? More generally, is learning to write an analytical essay still a central goal of a liberal education? If not, what else should we be doing?

Perhaps we should reconfigure our courses to emphasize the aspects of thinking and learning that we do better than AI does. We humans are (as of now) better at: asking questions, critical thinking, building and maintaining human relationships, analysis and prevention of bias, evaluating aesthetics, problem-solving about the present and future, ethical decision-making and empathy. What would it look like to build our courses around these features of our learning?

This semester will be in “sandbox mode,” as the gamers say, an exploratory mixing of the old world with the brave new one. Yes, we will read scholarship and write essays (in class on blue books), but we will also use generative AI individually and together. We will increase the frequency and modes of group collaboration and the development of higher-order questions that AI does not ask. I will re-introduce the most ancient assessment, the individual oral exam, while also requiring students to use generative AI on their first take-home essay.

Most importantly, we will critique the biases, omissions, and falsehoods of generative AI, in the model that I am calling “require and critique.” For some assignments, students will use generative AI and then, as their evaluated work, offer higher-order criticisms of its outputs based on other sources and inputs from our course. Finally, we will devote substantial time and effort to ethical analysis — the ultimate mode of intelligence that remains unique to humans, for now.

I know I’m still not ready. But the waves of some storms are too big to ignore or resist. The only choice, it seems, is to ride them.

BLOOMBERG OPINION

To contact the author of this story: Michael Peppard at mpeppard@fordham.edu
