The hottest topic nowadays is the artificial intelligence (AI) chatbot called ChatGPT. Since November, the company OpenAI has allowed the public to directly converse with the AI tool, impressing users with its human-like answers to any question. We now see truly intelligent AI that can help us in ways we only previously imagined.
Is this really the case? I would say: “Not quite.” Before embracing this latest AI tool, we must fully understand its proper use as well as the risks that come with it.
First, the problems caused by business use of earlier-generation AI algorithms have not been solved. Some examples:
• Social media and streaming service algorithms have led to addiction, depression, and social conflict among users
• Political operators have used social media algorithms to misinform, manipulate, and divide voters
• Self-driving algorithms in cars and planes have been linked to the deaths of several people
• Algorithms used for approving bank loans, hiring job applicants, and suggesting policing strategies and jail sentences have been shown to develop dangerous biases
AI has been deployed in deceptive ways that gave it too much credit for “intelligence” without sufficient regard for the risks involved for users or the public.
Second, while I am impressed with the seemingly knowledgeable outputs of ChatGPT, I usually discover factual errors when I check its answers for accuracy. For example, it repeatedly gave me the wrong way to format a journal article and attributed to me articles I never wrote. AI developers call these errors “hallucinations.”
And herein lies the problem: a large language model does not really “know” or “understand” anything, even when it appears to do so. Computer scientists “trained” the model to talk like a person by feeding it enormous amounts of human text data from various internet and digital sources. Computational formulas (algorithms) in the model calculated patterns and correlations based on the text data until it “learned” to produce human-like answers to questions asked of it. Thus, a language model is like a computerized parrot that mimics human speech by observing patterns in how people talk about various topics.
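The “computerized parrot” idea can be sketched with a toy bigram model. This is a vastly simplified stand-in for a real large language model, not how ChatGPT actually works: it merely counts which word tends to follow which in its training text, then generates sentences by replaying those patterns, with no grasp of meaning whatsoever.

```python
import random
from collections import defaultdict

# A tiny "training corpus" (illustrative only).
corpus = (
    "the model learns patterns in text "
    "the model mimics human speech "
    "the parrot mimics patterns in speech"
).split()

# Count, for each word, the words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def parrot(start, length=8, seed=0):
    """Generate text purely from observed word-to-word patterns."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # no known continuation; stop "talking"
            break
        out.append(random.choice(options))
    return " ".join(out)

print(parrot("the"))
```

The output is grammatical-looking word soup: every adjacent pair of words was seen in the training text, yet the program “understands” nothing it says. Real models are enormously more sophisticated, but the underlying principle of pattern replay is the same.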
Remember that a language model is not intelligent even when it sounds like it is. It has no sense of the meaning, real-life context, underlying reasoning, or intent behind what it is saying. Worse, its output is affected by the errors and biases in the data fed into it; as they say: garbage in, garbage out. Hence, language models, or AI in general, cannot be trusted by themselves for important information needs or for making critical decisions.
Clearly, the government needs to regulate AI properly for appropriate business use. Meanwhile, businesses can maximize the benefits (Do good) and avoid the sins of AI use (Do no harm) by following four basic principles.
TO DO GOOD, BUSINESSES MUST:

1. Educate AI users so they can exercise fully informed consent over the use of their data and ensure their personal benefit. Businesses must explain how personal and other data are used by AI to benefit the user, without overpromising such benefits merely to promote use. Such cautionary guidance is already given to potential investors in financial products, for example; it must also apply to AI use.
2. Use AI to promote human well-being. People need ways to improve their health, sharpen their critical thinking, and better understand others. AI tools like Pol.is, for example, enable people with diverse or opposing viewpoints to have conversations and find common ground.
TO DO NO HARM:

1. Fully test the AI tool in various contexts of use to understand and mitigate any risks to users. Technology-based tools, from cars and power drills to computers and microwave ovens, are meticulously tested by engineers for potential failures, safety issues, and other unintended harms to users. The same testing and safety protocols should apply to AI tools.
2. Fully warn users about the negative effects of AI and moderate against excessive use. The tobacco industry had to be forced by law to disclose that smoking is addictive and can lead to serious diseases. Such warnings should apply to AI tools. Hans-Georg Moeller, a professor of philosophy and YouTuber, issues this excellent warning at the end of each of his videos: “This video is produced to attract your attention and to promote this channel. The platform you are using is designed to be addictive and to mine your data for profit.” All businesses should issue such warnings as they apply.
Television host John Oliver summarized my main point very well: “The problem with AI right now isn’t that it’s smart. It’s that it’s stupid in ways that we can’t always predict.” If we remember this simple fact, we can be critical users of AI.
Benito L. Teehankee is the Jose E. Cuisia professor of business ethics at De La Salle University.