Artificial Intelligence (AI) innovations continue to pose significant challenges to existing legal frameworks, particularly in assigning liability: AI systems lack the discernible intentions that the law traditionally relies on to establish fault. A new paper from Yale Law School addresses this problem by proposing that AI be regulated through objective standards.
The paper treats AI programs as tools used by humans and organizations, making those parties responsible for the AI’s actions; in short, humans and organizations should bear responsibility for any harm arising from their use of AI. The argument is analogous to vicarious liability in tort law (respondeat superior), under which employers are accountable for actions their employees take in the course of their work. Likewise, the parties deploying AI should be liable for what the AI does.
The paper proposes four objective standards: designing AI systems with reasonable care (negligence); requiring the highest level of care in high-risk applications (strict liability); ensuring no reduction in the duty of care when AI substitutes for human agents; and ascribing intentions to AI programs on the presumption that they intend the reasonable and foreseeable consequences of their actions (mens rea).
The paper recommends holding AI programs to external standards of behavior based on what a reasonable person would do in the same circumstances. It outlines two practical applications: defamation and copyright infringement. For defamation, AI systems that generate defamatory content should be likened to defectively designed products: system designers should build in safeguards that mitigate the risk of producing defamatory material, and AI users must exercise care in crafting prompts and verifying AI output. For copyright infringement, it recommends that AI companies secure rights to copyrighted material and take reasonable precautions against infringement as a prerequisite for asserting a fair use defense.
The paper concludes that treating AI actions like human actions under agency law will ensure that humans and organizations, as principals, are held accountable for the actions of their AI agents. This approach prevents any reduction in the duty of care, a crucial step toward establishing legal accountability in AI regulation.