Rapidly evolving AI tools, which can pass from a developer to an adapter to an end user, are stirring a growing legal conundrum: just who is to blame when something goes wrong?
Is the company that built the artificial intelligence model liable, or a different one that provided the data? Or a third party involved in some other way? And what about the actual user of the model?
There are no easy answers, legal and technology specialists say, even as “AI incidents” continue to rise. The number of AI incidents, episodes in which harm or near-harm occurred, climbed to a record 233, a 56.4% rise from 2023, according to the 2025 AI Index Report from Stanford University. The concerns are growing in tandem with the widespread adoption of artificial intelligence: 78% of organizations said in 2024 that they use AI in some form.
“AI tools are now ubiquitous—but the realm of AI liability is still the Wild West, with untested legal theories making their way through the courts,” said Graham Ryan, an attorney at Jones Walker who litigates complex cases including those involving technology and AI.
There is no robust, well-settled set of industry standards to determine how different actors should conduct themselves at different phases of this value chain, Ryan said. “While we have 50 different state tort laws, it’s a very fact-intensive analysis and we don’t have much precedent on the AI front,” he said.
His advice: Companies should pay attention to existing tort and intellectual property law, as well as emerging AI-specific legal frameworks and guidance from federal agencies. Courts could increasingly look to industry standards put out by bodies like the National Institute of Standards and Technology, he said.
“We are beginning to see more cases focused on negligence and products liability—particularly defective design and failure-to-warn claims,” Ryan said. “But it is unsettled whether the functionality of certain AI systems renders them ‘products’ for purposes of product liability laws.”
One recent case involves Character.AI, a platform whose customizable chatbots generate humanlike text responses. A mother sued Character Technologies and Google, alleging that the bot encouraged her 14-year-old son to take his own life. The companies have said there is no legal basis for her argument.
The challenge is that the outputs of generative AI models are not really explainable or transparent right now, said Greg Smith, a policy analyst who focuses on emerging technologies for the RAND Corporation, a think tank. “We don’t have a perfect explanation why they output what they output,” Smith said. “And therefore it can be very hard, at least in, you know, traditional tort liability terms, to identify who actually messed up.”
In simple terms, it is extremely difficult to figure out who even made the mistake and who could potentially be held liable for injury to a third party, Smith said.
The most common AI lawsuits currently deal with algorithmic bias and discrimination, said Katherine Forrest, a partner at Paul Weiss, who chairs the firm’s digital technology group.
But there is more uncertainty around the newer tools that incorporate generative AI, especially open source models, which can be used or modified by anyone.
“There will be some negligence lawsuits that will be fought out in the courts about whether or not there is a tool which is acting in a way that somebody should have known was going to be harmful and didn’t take appropriate steps,” Forrest, previously a U.S. district judge for the Southern District of New York, said. “But those haven’t been fully litigated yet, and so we don’t yet know how those are going to come out.”
Forrest predicts that common law, which has developed over hundreds of years, will adapt to this latest technological challenge. The closest analogy, she said, was the rise of the internet and the lawsuits over who owned online rights to recorded music, although AI has much broader applications, from video games to financial services.
“There will be many carefully thought out contractual arrangements between the developers and the tool users to try and figure out where, as I call it, the baton gets passed, where the liability baton gets passed from, let’s just say the model developer to the user,” she said. “And there may also be an insurance company that gets involved and that takes on certain risks.”
A lot of AI compliance now consists of evaluating the risks posed by an AI tool and identifying where liability begins and ends, Forrest said. That might mean a company decides not to use AI for a certain task because the risk is too high.
The fundamental challenge with AI is that it is not like traditional software, said Karthik Ramakrishnan, the CEO and founder of Armilla AI, which identifies risks in artificial intelligence systems and provides AI liability insurance to firms.
“If you ask the same question to a generative AI model, it will never have the same output, exactly the same output,” said Ramakrishnan, who counts the Boston Consulting Group among his clients. That’s because it is approximating the answer, he said.
“So it’s not a matter of if,” Ramakrishnan said, “but when it’s going to make a mistake when that approximation deviates too far from the expected accuracy.”
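Ramakrishnan’s point about approximation comes down to how generative models produce text: each output token is sampled from a probability distribution rather than looked up deterministically. The sketch below is a minimal illustration written for this piece, using made-up token probabilities rather than any real model or vendor API, of how repeated runs of the same prompt can diverge, including the occasional low-probability output that drifts far from the expected answer. Decoding settings such as temperature control how spread out that distribution is, so unlikely continuations surface more often at higher settings.

# A minimal, illustrative sketch (not from the article, and not any vendor's API):
# generative models pick each next token by sampling from a probability
# distribution, so the same prompt can produce different outputs on different runs.
import random

# Hypothetical next-token probabilities a model might assign for one prompt.
next_token_probs = {
    "Paris": 0.62,       # the most likely continuation
    "Lyon": 0.21,
    "Marseille": 0.12,
    "Bordeaux": 0.05,    # a low-probability continuation that still gets sampled sometimes
}

def sample_token(probs: dict[str, float]) -> str:
    """Draw one token in proportion to its probability (temperature-style sampling)."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Asking the "same question" three times can yield three different answers.
for run in range(1, 4):
    print(f"run {run}: {sample_token(next_token_probs)}")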
Written by Kaustuv Basu
Originally published by Bloomberg Law