IMAGINE a bank robbery that is organized through e-mails and texts. Would the e-mail providers or phone manufacturers be held responsible? Of course not. Any punishment or penalties would be meted out to the criminals.
Now consider a harmful outcome brought about by artificial intelligence (AI): an act of libel, some bad advice that results in financial loss, perhaps instructions for a destructive cyberattack. Should the company that built the AI be held liable? Thoughtful people such as Ezra Klein and Luke Muehlhauser argue that it should be, at least partially, as a way to encourage those companies to build safer services.
Unfortunately, it would be hard to make this plan work. Part of the reason lies in what economists call Coasean analysis, which suggests that liability should be assigned to the least-cost avoider of the harm, that is, the party that could have prevented it most cheaply.
In the case of the bank robbery, the providers of the communications medium or general-purpose technology (i.e., the e-mail account or mobile device) are not the least-cost avoiders and have no practical control over the harm. And since general-purpose technologies — such as mobile devices or, more to the point, AI large language models (LLMs) — have so many productive uses, the law shouldn’t discourage their production with an additional liability burden.
For a contrasting case, consider another imperfect analogy: a zoo that leaves the lion cage unlocked. It may well end up liable for a rampaging beast, since it could have prevented the escape directly and at low cost; the zoo is the least-cost avoider of that harm.
This Coasean approach is not perfect for all decisions. But it is a rough initial guide to what is economically efficient.
As it stands, if a user receives bad medical information from a Google search, Google is not held responsible. Google has taken care to elevate more reliable medical results in response to common questions, as it should — but if someone clicks on the 27th link and decides to forgo a COVID-19 vaccine, the fault is rightly regarded as their own. Given how many different ways LLMs can respond to queries and to repeated interrogation, it is not obvious that legal penalties can make all their answers accurate.
Similarly, books and maps have provided dangerous information to many criminals and terrorists. But liability for these kinds of crimes is generally not placed on the publisher. It is impractical to demand that all published information be the right combination of true and harmless. And what is the output of an LLM but a new and more powerful kind of book or map? (Or how about a more mischievous question: What if the LLM query requested that the answer be printed in the form of a book?)
On a more practical level, assigning liability to the AI service just isn’t going to work in a lot of areas. The US legal system, even when functioning well, is not always able to determine which information is harmful enough to warrant penalties. A lot of good and productive information — such as teaching people how to generate and manipulate energy — can also be used for bad purposes.
Placing full liability on AI providers for all their different kinds of output, and for the consequences of that output, would probably bankrupt them. Current LLMs can produce a near-infinite variety of content across many languages, including coding and mathematics. If bankruptcy is indeed the goal, it would be better for proponents of greater liability to say so.
It could be that there is a simple fix to LLMs that will prevent them from generating some kinds of harmful information, in which case partial or joint liability might make sense to induce that additional safety. If we decide to go this route, we should adopt a much more positive attitude toward AI — the goal, and the language, should be more about supporting AI than regulating it or slowing it down. In this scenario, the companies might even voluntarily adopt the beneficial fixes to their output, to improve their market position and protect against regulatory reprisals.
By no means have all liability questions about AI and LLMs been answered, nor is the Coasean approach a complete guide to policy decisions. But when a service provides such a wide range of useful information, more regulation or liability assignment is not necessarily the way to make it safer or more useful.
Some people are worried that AI is moving too fast. My point is that we should also proceed with caution as we consider limits and rules on AI.
BLOOMBERG OPINION