Abstract
This article, centered on the Garcia v. Character Technologies order, examines the shift in the liability regime in artificial intelligence law from the protection of “free speech” to the grounds of “design defect”. This shift is analyzed in light of precedent and technology law doctrine, and its future implications for the technology ecosystem are evaluated.
I. INTRODUCTION
Artificial intelligence chatbots are evolving from mere information-providing tools into digital figures capable of emotional interaction with users, raising novel legal questions that existing legal systems are ill-equipped to answer and exposing the need for new regulation in this area. Traditionally, in U.S. law, media products containing ideas and expression are protected by the First Amendment to the United States Constitution (“the First Amendment”), while defective products are subject to the product liability regime. However, artificial intelligence systems that generate autonomous outputs and exhibit human-like behaviors are increasingly blurring this distinction. This technological revolution is pushing the law to question its most fundamental paradigms: Can the words produced by a machine through probabilistic calculations benefit from constitutional protection like an author’s novel? Or are these words, like a faulty car engine, the defective output of a product carrying foreseeable risks?
Indeed, on May 21, 2025 (signed May 20, 2025), the U.S. District Court for the Middle District of Florida (“the Court”) issued its order in Garcia v. Character Technologies (“the Garcia Order”), which addressed these issues in depth. As emphasized in a letter signed by 54 state attorneys general and quoted at the very beginning of the October 23, 2024 complaint (“the Complaint”) filed by Megan Garcia (“the Plaintiff”) that forms the basis of the Garcia case, “the proverbial walls of the city have already been breached,” and it is time to act against the dangers artificial intelligence poses to children1.
This article, drawing upon the Garcia Order, provides a detailed examination of the legal liability regimes applicable to AI chatbots. It analyzes how the focus of liability is shifting from “expression” under First Amendment protection to “design defect” subject to the product liability regime. Furthermore, it examines how this shift impacts the entire technology and artificial intelligence ecosystem, from platform founders to infrastructure providers, drawing on the legal doctrines referenced in the court’s orders and the parties’ pleadings.
In the Garcia Order, the Court delved deep into these inquiries. The Court noted that free speech traditionally relies on conscious and creative choices made by humans and stated that it is uncertain whether AI outputs meet these criteria. Consequently, the Court found insufficient grounds and evidence to determine at an early stage of the litigation whether AI outputs could be considered constitutional “speech”2. However, the Court opened a far more significant door by ruling that the application could be subject to product liability due to its “design defects”3. In the Garcia Order, the Court largely denied the motions to dismiss filed by Character Technologies, Inc., Noam Shazeer, Daniel De Freitas, and Google LLC (Alphabet Inc. having been voluntarily dismissed earlier) (“the Defendants”). While this order is not a final judgment on the merits, it is a critical interlocutory order that thwarted the Defendants’ attempt to terminate the case at the outset on jurisdictional and procedural grounds and allowed the case to proceed to the next stage: the discovery process. The Court’s finding that the Plaintiff’s claims based on product liability and the Florida Deceptive and Unfair Trade Practices Act4 (“FDUTPA”) were worthy of consideration on the merits, while only dismissing the claim for “intentional infliction of emotional distress” (“IIED”)5, heralds the formation of a new precedent in artificial intelligence law.
This article’s central thesis is that the Garcia Order shifts the focus of AI liability discussions from the abstract ground of “content” and “expression” to the concrete and auditable ground of “product design” and “duty of care,” and that this constitutes a turning point for AI law. The order does not limit the chain of liability to the platform itself but extends it to the founders and infrastructure providers who make it possible6, signaling that the AI ecosystem may be subject to holistic legal scrutiny.
This article aims to analyze the legal reasoning behind the Garcia Order and to evaluate the position of AI applications within the boundaries of constitutional protection and liability regimes. In this context, it centers on the following research question: How should liability for the design defects of AI chatbots be balanced against free speech protection? Within this framework, the article first examines the background of the case and the parties’ claims and defenses in detail, then provides a legal analysis of the Garcia Order, and finally discusses the order’s broad-ranging impacts on the technology sector and the legal system. The article adopts a comparative approach with precedent cases, discusses the order’s wider significance for artificial intelligence law, and sheds light on future debates.
II. BACKGROUND OF THE CASE AND THE PARTIES’ LEGAL POSITIONS
The lawsuit is essentially a tort and product liability action brought by the Plaintiff following the suicide of her 14-year-old son, Sewell Setzer III (“Setzer”), after his intensive use of a chatbot application named Character.AI (“C.AI”). The case tests the limits of the legal and ethical responsibilities of technology companies, centering on the allegation that an AI product caused a user’s death by negatively impacting his mental health. At the heart of the dispute lies the victim’s family’s search for justice on one side and, on the other, the technology companies’ defense grounded in innovation and the First Amendment’s protection of free speech.
A. The Plaintiff’s Core Allegations in the Complaint
In the Complaint, the Plaintiff frames the Defendants’ actions as intentional or, at the very least, grossly negligent. The Plaintiff’s allegations center on the following key points: “defective and dangerous product”, “deceptive, unfair trade practices and data mining”, “foreseeable harm and breach of duty of care”, “commercialization of foreseeable harm”, “negligence”, and “wrongful death, unjust enrichment, and emotional harm”. In the remainder of the article, these claims are examined under separate headings.
1. Defective and Dangerous Product
The Plaintiff’s primary claim is that C.AI is a defective product rather than a service. These defects stem from the application’s technical design. These allegations include the absence of an effective age-verification mechanism for minor users, insufficient parental controls, and the inability of users to filter harmful content. Most significantly, the Plaintiff asserts in the Complaint that the intentional “anthropomorphic” (human-like) design of the AI characters, which fosters addiction and an unreal emotional bond with the user, constitutes a fundamental design defect7. This claim is supported in the Complaint by the thesis that the AI characters were designed as “counterfeit people” to deceive users and that this design exploits users by leveraging the “ELIZA effect”, which carries the potential for psychological manipulation8. Named after one of the first chatbots, ELIZA, developed in the 1960s, this effect refers to the psychological tendency of people to attribute human qualities such as consciousness, empathy, and understanding to computer programs, even in response to simple text-based replies9.
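The mechanism behind the ELIZA effect can be made concrete with a short illustration. The following Python sketch loosely imitates ELIZA’s original keyword-and-reflection technique; the rules and replies are invented for illustration and are far simpler than the original program’s script:

```python
import re

# A few ELIZA-style rules: match a keyword pattern, reflect the user's words back.
# (Illustrative rules only; the original ELIZA "DOCTOR" script was much richer.)
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def respond(user_input: str) -> str:
    """Return a canned reflection of the user's own words; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT_REPLY

print(respond("I feel alone"))           # -> Why do you feel alone?
print(respond("My mother ignores me"))   # -> Tell me more about your mother.
print(respond("Nothing helps."))         # -> Please go on.
```

Even substitutions this trivial famously led some of ELIZA’s early users to attribute understanding and empathy to the program; modern LLM-driven characters produce responses that are vastly more fluent, which is precisely the anthropomorphic pull the Complaint describes.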
Furthermore, the Plaintiff has alleged that the Large Language Model (“LLM”) technology of the C.AI chatbots was trained on toxic and sexual content from the internet, making it inevitable that it would produce harmful outputs, in line with the “Garbage In, Garbage Out” (GIGO) principle used in computer programming and mathematics, which states that the quality of system output is determined by the quality of the input10.
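The GIGO principle lends itself to a simple demonstration. The Python sketch below uses a first-order Markov chain, a toy text generator vastly simpler than an LLM but exhibiting the same dependency: the model can only recombine what it was trained on, so the character of the training corpus fixes the character of the output. The two corpora are invented for illustration:

```python
import random
from collections import defaultdict

def train(words: list[str]) -> dict[str, list[str]]:
    """Build a first-order Markov model: map each word to its observed successors."""
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict[str, list[str]], seed: str, length: int = 8) -> str:
    """Walk the model from `seed`, sampling one successor at a time."""
    out = [seed]
    while len(out) < length and model.get(out[-1]):
        out.append(random.choice(model[out[-1]]))
    return " ".join(out)

# The model can only ever recombine its training data (toy corpora):
benign = "the sun rises and the birds sing and the day begins".split()
toxic = "you are worthless and you are a burden and nobody cares".split()

print(generate(train(benign), "the"))  # e.g. "the day begins"
print(generate(train(toxic), "you"))   # inevitably echoes the toxic corpus
```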
2. Deceptive, Unfair Trade Practices and Data Mining
The Plaintiff addresses the Defendants’ actions not only in the context of negligence or defect but also as a deliberate and systematic mechanism of exploitation that can be evaluated under the FDUTPA. The most concrete manifestation of this framework is the Plaintiff’s allegation that the C.AI application intentionally misleads its users. The Complaint argues that the AI bots’ claims of being “real people” or even, in some cases, “licensed CBT therapists” during dialogues, instead of disclosing their machine nature, constitute a deceptive act11. According to the Plaintiff, this practice leads vulnerable users, especially adolescents, to place excessive and undue trust in the AI, lose their ability to distinguish reality from fiction, and share their intimate secrets12. The Court also found this claim worthy of consideration on the merits under the FDUTPA.
The Plaintiff further argues that the Defendants went beyond such deceptions and built their business model on “unfairness.” Under Florida law, a commercial practice is considered “unfair” if it offends established public policies, is immoral, unethical, or oppressive, or causes substantial and unavoidable injury to consumers13. The Plaintiff argues that C.AI’s business model fits this definition perfectly. At the heart of the unfairness claim is the allegation that the application’s design intentionally targets the neurological and psychological vulnerabilities of young people. As previously mentioned, the Plaintiff alleges that the “anthropomorphic design,” the “ELIZA effect,” and addictive interaction traps were consciously deployed by the Defendants to exploit the still-developing prefrontal cortex (the brain region responsible for decision-making and impulse control) of young people14, and that this is not merely a design choice but an unfair practice involving the systematic exploitation of a vulnerable consumer group for commercial gain.
According to the Plaintiff, the purpose behind offering the application for “free” reveals the ultimate goal of this exploitation: to perform “data mining” by collecting user data, especially the intimate thoughts and feelings of young people. This data mining model transforms the user from a customer into a “product.” Indeed, the Complaint alleges that users’ personal data, collected through such manipulative and deceptive methods, was then used to train the LLM, increasing the company’s value to billions of dollars. The Plaintiff argues that this constitutes fundamentally unjust enrichment and an unfair commercial practice.
3. Foreseeable Harm and Breach of Duty of Care
The Plaintiff has alleged that the Defendants were aware, or at least should have been aware, of the psychological risks (addiction, detachment from reality, manipulation, etc.) that LLM technology could create, especially for individuals under a certain age. The strongest evidence for this claim is that Google repeatedly refrained from releasing similar technologies it had developed in-house (Meena, LaMDA) to the public due to safety concerns and internal ethical guidelines; nevertheless, the founders took this technology, “known to be very dangerous,” and brought it to market under the name C.AI for profit and with inadequate safety measures15. The Plaintiff has asserted that the Defendants’ release of the product without sufficient safety measures, despite these foreseeable risks, constitutes a breach of their duty of care, and that this negligence was a direct cause of Setzer’s death16.
4. Commercialization of Foreseeable Harm
The Plaintiff alleges that the Defendants foresaw that their products could lead to serious harms such as addiction, social isolation, and even suicidal thoughts (in light of Google’s own internal research). However, instead of eliminating these risks, they used them as “features” to increase engagement and data collection. Ignoring or indirectly encouraging the known and foreseeable harms of a product for profit maximization is one of the most fundamental examples of unfair trade practices. This has been framed as an unethical approach that prioritizes the company’s profit over user safety and public health.
For these reasons, the Plaintiff has expanded the scope of the lawsuit by alleging that the Defendants’ actions are not just an individual tort but the result of a systematic and unfair commercial model that violates the public interest.
5. Negligence
The Plaintiff’s negligence claim is based on grounds such as the platform’s failure to develop a verification system to determine user age and its failure to prevent sexual or violent content from reaching child users. The Plaintiff asserts that Setzer engaged in sexually explicit correspondence with some characters and that the accessibility of such content through the chatbots constitutes a gross defect of the platform17. This claim rests on the “duty of care” doctrine, which can be considered the cornerstone of tort law. According to the Plaintiff’s argument, by designing and launching a product aimed specifically at young users, the Defendants undertook a duty to exercise reasonable care to protect those users from foreseeable harm. The breach of this duty, as detailed in the Complaint, occurred through a series of intentional design choices, most notably: (i) the absence of effective age-verification mechanisms to prevent minors from accessing the platform, (ii) the failure to establish adequate content-filtering systems to protect young users from sexually explicit and manipulative dialogues, (iii) the intentional design of “anthropomorphic,” human-like characters to deepen user addiction and keep users on the platform longer, and (iv) the lack of a safety protocol to intervene or alert parents when a user shows serious risk signals, such as suicidal thoughts.
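For contrast, safeguards of the kind the Complaint says were missing are technically modest. The following Python sketch is a minimal, hypothetical illustration of two of them, an age gate and a risk-signal screen; the function names, phrase list, and age threshold are assumptions made for illustration, not taken from the C.AI application or the court record, and a production system would rely on robust identity verification and trained classifiers rather than self-reported birth dates and keyword matching:

```python
from datetime import date

# Hypothetical safeguards; names, phrases, and the age threshold are
# illustrative assumptions only.
RISK_PHRASES = ("kill myself", "end my life", "want to die", "suicide")
MINIMUM_AGE = 18

CRISIS_REPLY = (
    "It sounds like you may be going through a very hard time. "
    "In the U.S. you can call or text the 988 Suicide & Crisis Lifeline."
)

def is_minor(birth_date: date, today: date) -> bool:
    """Age gate: compare a verified birth date against the minimum age."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age < MINIMUM_AGE

def screen_message(text: str) -> str | None:
    """Risk-signal screen: return a crisis resource instead of a model reply."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        return CRISIS_REPLY
    return None  # no risk signal detected; the chatbot may respond normally

# Usage: run both checks before any model output reaches the user.
assert is_minor(date(2010, 2, 28), today=date(2024, 5, 20))
assert screen_message("sometimes I want to die") == CRISIS_REPLY
assert screen_message("tell me about dragons") is None
```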
6. Other Tort Claims: Wrongful Death, Unjust Enrichment, and Emotional Harm
The Plaintiff also brought claims for wrongful death, unjust enrichment, and intentional infliction of emotional distress within the scope of the lawsuit18; however, the Court evaluated these claims differently.
The legal viability of the wrongful death claim depends on the court’s acceptance of the underlying tort claims, such as negligence or product liability. Since the Court decided that the Plaintiff’s primary negligence and design defect claims should proceed to the merits, it ruled that the wrongful death claim based on those allegations could not be dismissed at this stage of the litigation. If a causal link is proven between the Defendants’ wrongful conduct and Setzer’s death, this claim will also succeed.
The unjust enrichment claim, on the other hand, questions the ethical and legal foundation of the Defendants’ business model. The Plaintiff alleged that Setzer provided the platform not only with a monthly subscription fee of $9.99 but also with a far more valuable asset: his personal data, intimate thoughts, and emotional responses. According to the Plaintiff, this data was used to train C.AI’s LLM, increase the model’s value, and ultimately enable the company to reach a valuation of billions of dollars. Although the Defendants argued that Setzer received the “service” of using the platform in exchange for this data and that there was thus a mutual benefit, the Court did not find this argument sufficient to dismiss the claim. Judge Anne Conway (“the Judge”) held that whether a child received “adequate consideration” in exchange for the commercialization of his data, particularly in light of the harm suffered, and whether this amounts to unjust enrichment, are questions to be evaluated at later stages of the case.
In contrast, the only claim the Court dismissed was for intentional infliction of emotional distress. The standard for this claim in Florida law is extremely high: it is not enough for the defendant’s action to be merely hurtful or malicious; it must be “extreme and outrageous” to a degree that is “utterly intolerable in a civilized community”19. The Court decided that while the Defendants’ actions could be described as negligent or reckless, the allegations that these actions reached the level of intentional and extreme immorality required for IIED were not sufficiently strong. Furthermore, this claim was brought by Setzer’s mother, the Plaintiff, for her own emotional distress. Florida law generally requires that for such a claim to be valid, the tort must be directed at the plaintiff herself, or the plaintiff must have witnessed the event20. Since these conditions were not met, the Court dismissed the IIED claim.
B. The Defendants’ Defense: A Collective Denial of Liability
The Defendants pursued a multi-faceted defense strategy, spanning different branches of law, to have the case dismissed before it reached the merits. This strategy ranged from constitutional protection to the fundamental principles of tort law, procedural objections, and established doctrines of corporate law. The core of the defense was the constitutional argument that AI outputs are “speech” protected under the First Amendment and therefore cannot be subject to legal sanction. In addition, the Defendants aimed to neutralize the tort claims at the heart of the Plaintiff’s case by arguing that the AI platform was a “service,” not a “product,” and that the strict product liability regime therefore could not apply. Finally, indirect actors like Google and the individual Defendants argued that there was no direct legal connection, neither a causal link nor one grounding liability, between themselves and the alleged harm, aiming to break the chain of liability from the very beginning. The following sections detail how each defendant tailored this general strategy to its own legal position.
1. Character Technologies’ Defense: The Shield of Free Speech and the “Service” Argument
a. The First Amendment Protection
The defendant company built its primary defense on the premise that the outputs generated by C.AI fall within the scope of “free speech” protected by the U.S. Constitution. In its motion to dismiss, this defense was divided into two parts. First, the company argued that AI outputs are akin to the content of a book, film, or video game, and that holding it responsible for the harm allegedly caused by these “expressions” would therefore violate the First Amendment21. To support this argument, the motion cited landmark cases in which media products were alleged to have caused suicides and which courts dismissed. For example, McCollum v. CBS, Inc., filed on the claim that Ozzy Osbourne’s song “Suicide Solution” caused a teenager’s suicide, and Watters v. TSR, Inc., filed on the claim that the game Dungeons & Dragons led to a similar outcome, formed the main pillars of the motion22. On this view, the effect of an expression on its recipient, no matter how tragic, does not make the expression itself illegal.
Secondly, Character Tech. presented an argument based on broader public interest rather than its own right to free speech: the public’s right to access information and ideas. According to the motion, a liability ruling against the platform would create a “chilling effect” that would force not only Character Tech. but the entire generative AI industry into censorship23. Companies would be forced to excessively restrict their AI models to avoid potential lawsuits, which would slow down innovation and hinder public access to this new technology. With this argument, the company sought to frame the case not as an individual tragedy but as a constitutional matter concerning the entire society’s free speech and access to information. The company also emphasized that the interactions on the platform did not constitute “incitement to imminent lawless action” under the Brandenburg v. Ohio standard and therefore could not be excluded from First Amendment protection24.
b. The “Service,” Not “Product” Argument
To fundamentally refute the product liability claims, Character Tech. argued that its platform is not a tangible “product” but rather a “service” that offers interaction and information to its users. This distinction is critical in terms of its legal consequences. The strict liability regime applied to “products” only requires the plaintiff to prove that the product was defective and caused harm, whereas the negligence standard applicable to “services” requires the plaintiff to prove that the service provider breached a specific duty of care, which is much more difficult.
In the motion, this argument was grounded in the Restatement (Second) of Torts, under which product liability applies to tangible, physical items such as automobiles or water heaters25. Character Tech., by contrast, stated that the source of the Plaintiff’s harm was not a tangible malfunction (e.g., a phone overheating) but rather “abstract ideas, words, and expressions” transmitted through the platform26. Therefore, it argued that treating the platform itself as a “product” would mean treating books, magazines, or websites as products as well, which would create unacceptable pressure on free speech. In short, the company claimed that the case should be treated not like a “defective toaster” case, but like a “book containing a harmful idea” case.
2. Google’s Defense: Indirect Relationship and Denial of Liability
Initially, both Google LLC and its parent company, Alphabet Inc., were named as defendants, but Alphabet Inc. was voluntarily dismissed from the lawsuit at the Plaintiff’s request before the Court ruled on the motions. The defenses presented in the two companies’ joint motion to dismiss are therefore discussed in this section as primarily reflecting the position of Google LLC, which remains in the case.
Google centered its defense on the thesis that there was no direct causal link between itself and the alleged harm, building this thesis on three main arguments: the lack of direct control over the C.AI application, the non-fulfillment of the legal requirements for “aiding and abetting” a tort, and finally, the absence of a legally valid “causation” link between its own actions and the resulting harm.
a. Lack of Direct Liability
Google began its defense by drawing a clear line between itself and the C.AI application alleged to have caused the harm. In its motion, it was clearly stated that Google did not design, code, market, distribute, or operate this application27. It was emphasized that Character Tech. is a completely separate legal entity and that Google has no operational control, management authority, or ownership stake (shareholding) in it28. Google argued that the Plaintiff’s “shotgun pleading” method, which grouped all defendants under a single “Defendants” heading, violated the obligation to separately outline the role of each defendant29.
b. Denial of Aiding and Abetting Claim
Google developed a detailed defense against one of the Plaintiff’s strongest claims, the “aiding and abetting” argument. In its motion to dismiss, Google listed the essential elements required for such a claim as: (i) the existence of an underlying tort, (ii) the aider’s actual knowledge of that tort, and (iii) the provision of substantial assistance in its commission30.
Google claimed that none of these elements were met in its case. Regarding actual knowledge, Google argued that it could not have been aware of the specific design defects alleged in the C.AI chatbot, as it exercised no control over the product’s design or development31. Furthermore, Google asserted that its refusal to release models like LaMDA to the public due to safety concerns demonstrated the company’s commitment to safety and was a measure of “precaution”32.
Regarding substantial assistance, Google stated that the services (cloud computing) and financing it provided to C.AI were standard, general commercial activities offered to thousands of other companies under market conditions33. It argued that if such general services were deemed “substantial assistance,” a state of “legal chaos” would ensue in which virtually all infrastructure providers, from internet companies to utilities, could be held responsible for the potential torts of their customers. At this point, it adapted to its own situation the precedent of the Supreme Court’s Twitter, Inc. v. Taamneh decision, which held that providing a publicly available platform does not constitute substantial assistance to illegal activities carried out on that platform34.
c. Lack of Causation
Finally, Google argued that the causal link between its own actions and Setzer’s tragic death was extremely weak and indirect35. As pleaded by the Plaintiff, the causal chain consisted of Google hiring engineers, those engineers leaving and founding a new company, that company designing a product, a teenager using that product, his family confiscating his phone, and, finally, the teenager accessing a third party’s firearm and committing suicide. Google argued that a chain comprising so many independent causes was unforeseeable. Noting that under Florida law suicide is generally considered an intervening cause that breaks the chain of causation, Google claimed that its own actions could not be treated as a legally cognizable cause of this tragic outcome36.
3. The Platform Founders’ Joint Defense: The Corporate Shield and Jurisdictional Objections
The co-founders of C.AI, Noam Shazeer and Daniel De Freitas (“the Founders”), followed a similar defense strategy, rejecting the claims against them on two main legal grounds. First, Shazeer sought refuge in the “corporate shield”37, stating that all alleged actions were carried out on behalf of the company and within its corporate capacity. According to this argument, for a company executive to be held personally liable for the company’s torts, it must be proven that they personally and directly participated in that tort. Shazeer argued that the Plaintiff failed to present concrete evidence to this effect, and the allegations remained general and superficial. De Freitas similarly emphasized that all his activities were within the scope of his employment at C.AI, claiming that the corporate shield protected him from personal liability38.
Second, the Founders, who reside in California, claimed that the Court lacked personal jurisdiction over them. In this context, Shazeer argued that he had no contacts with the State of Florida related to the events in question. Accordingly, the constitutional “minimum contacts” requirement was not met, warranting dismissal for lack of personal jurisdiction39. De Freitas also made a strong objection regarding personal jurisdiction, similar to Shazeer’s, and to support this argument, he emphasized that he had never lived or worked in Florida and had only visited the state once for tourism purposes in his entire life. On these grounds, he argued that the Court lacked the authority to try him and that his constitutional right to a fair trial would be violated40. The Founders’ defense highlighted the use of fundamental corporate law protection mechanisms and constitutional procedural guarantees to avoid personal liability.
III. LEGAL ANALYSIS OF THE GARCIA ORDER
The Garcia Order offers a pioneering roadmap for how existing legal doctrines can be interpreted in response to harms caused by AI technology. In rejecting the Defendants’ primary defenses—that AI outputs have absolute protection under the First Amendment and that the AI platform is a “service” offering interaction and information to users rather than a “product” subject to strict product liability—the Judge made two critical legal moves, particularly in the areas of the First Amendment and product liability, that have the potential to shape the course of future cases. These moves have the potential to reshape the legal, ethical, and design-related liability regimes governing AI developers.
A. The Free Speech Dilemma Under The First Amendment: Are AI Outputs “Speech” or “Code”?
The most notable aspect of the Garcia Order is the Court’s skeptical stance regarding the constitutional status of AI-generated output. The Defendants likened AI dialogues to protected media, such as video game characters or social media interactions, and argued that these expressions were protected under the First Amendment’s right to free speech. However, the Judge found these “analogy-based” arguments to be underdeveloped and noted that the Defendants had failed to explain “why words strung together by an LLM are speech”41. As journalist Adi Robertson pointed out in The Verge, the Judge adopted a “skeptical” attitude on this matter42. The order referenced Justice Barrett’s concurrence in Moody v. NetChoice43, emphasizing that when a human does not make an “expressive choice” and decisions are left to an autonomous algorithm, the First Amendment protection can be questioned. The Court, stating that it was “not prepared to hold that C.AI’s output is speech at this stage of the litigation”44, postponed this complex constitutional question to the later stages of the case. At the root of this hesitation lies a deep uncertainty as to whether AI outputs, unlike traditional media, are based on a human’s conscious will and choices aimed at conveying a specific message.
This approach has sparked debate in legal circles; for instance, institutions that advocate for free speech, like the Center for Democracy & Technology, have criticized the Garcia Order, calling its First Amendment analysis “pretty thin” and thereby opening the door to further discussion on the matter45. These views indicate that the boundaries of free expression in the context of technology law are being actively debated. While the order does not entirely remove First Amendment-based free speech protection for artificial intelligence, it makes clear that this protection is no longer an “automatic” right. In future cases, the burden may fall squarely on technology companies to prove why their products’ outputs are more than just a random string of code and qualify as protected “expression”.
B. A New Ground for Product Liability: The Assessment of AI Platforms as “Defective Products”
The second prominent aspect of the Garcia Order is the “content-design” distinction it made when applying the product liability doctrine to AI platforms. The Defendants had argued that C.AI is a “service” that provides ideas and information, not a tangible “product,” and therefore falls outside the legal scope of product liability46. The Judge, citing precedents in Florida law such as Brookes v. Lyft and T.V. v. Grindr47, made a clear distinction between the “expressive content” a platform offers and the “design of the platform itself,” and unequivocally rejected the Defendants’ “service, not product” defense.
While acknowledging that courts generally do not consider “ideas, images, information, and expressions” to be products, the Judge emphasized that the tangible medium that delivers these expressions is clearly a product48.
The Judge applied this distinction to the dispute at hand, finding that the Plaintiff’s complaints focused not only on what the AI said (its content) but also on how the application itself functioned (its design). The Court noted that the Complaint pointed to elements such as “Character A.I.’s failure to verify user age, its omission of reporting mechanisms, the programming of characters to mimic human mannerisms, and the inability of users to exclude indecent content”49. It therefore concluded that the harmful interactions were “only possible because of the alleged design defects in the Character A.I. app”50.
This approach allowed the Court to return to the traditional grounds of tort liability without touching upon constitutional speech protection. This interpretation creates a new area of “duty of care” for technology companies and has the potential to shift the focus of future litigation from “expressions” to the “design of the algorithm and the user interface.” This trend could be said to challenge the claim of neutrality in user experience designs of platforms and to open the principles of safe design to legal scrutiny.
C. Expansion of the Scope of Liability: The Situation of Google and the Founders
The Court did not limit the liability claims to C.AI alone; it also ruled that the other defendants should remain in the case. In this context, the Court decided that the Plaintiff’s claims that Google could be held liable as a “component part manufacturer” and for “aiding and abetting” should be heard on the merits51. The Court found credible the Plaintiff’s allegations that “the model underlying (Character A.I.) was invented and initially built at Google” and that the customized and indispensable cloud infrastructure (GPUs, TPUs, etc.) Google provided constituted a level of involvement well beyond a simple service.
Furthermore, the Court also rejected the Founders’ requests to hide behind the corporate shield at this stage. It stated that the Plaintiff’s allegations that the Founders “used the company as a vehicle to bypass Google’s safety policies” and ultimately turned the company into a “shell company” were sufficient to investigate personal liability under the “alter-ego” theory52, and granted the Plaintiff a 90-day jurisdictional discovery period to investigate the truth of these claims53.
IV. CONCLUSION
In conclusion, although a final verdict has not yet been reached in the Garcia v. Character Technologies case, the Court’s interlocutory order, which largely denied the Defendants’ motions to dismiss and allowed the case to proceed to the discovery phase, constitutes a significant precedent for artificial intelligence law. Regardless of the future course and outcome of the case, the order demonstrates that technology companies can no longer easily evade responsibility by invoking the legal protection of the First Amendment or by claiming “we are a service, not a product”. The Court’s stance that AI outputs may not be automatically covered by constitutional protection54, and that traditional liability principles can be applied where the platform’s design lacks due care55, creates new obligations for developers. This approach reveals that AI systems can be subject to legal scrutiny not only for their content but also for their technical architecture and the way they interact with users56.
Furthermore, the Garcia Order shows that AI systems should now be evaluated not just as technical tools but as social and interactive products with profound effects on user psychology. Designs that specifically target vulnerable users or pose foreseeable risks to them may not be able to hide behind the shield of free speech. The Court’s findings, which have the potential to extend the chain of liability to the Founders and infrastructure providers57, serve as a warning to all actors in the sector. It is therefore becoming not just an ethical choice but a legal necessity for developers to adopt “safety-by-design” principles and invest in preventive mechanisms such as age verification and content filtering58. The Garcia Order also clearly demonstrates the inadequacy of current legislation in the face of rapidly developing artificial intelligence technologies and shows that the resulting legal gap is beginning to be filled by court precedent, underscoring the urgency of legislative intervention. The order once again confirms that the delicate balance between AI innovation and user safety can only be established by the concurrent functioning of legal, ethical, and design-based accountability regimes.
FOOTNOTES
Megan Garcia, Complaint dated October 23, 2024, p. 1, https://storage.courtlistener.com/recap/gov.uscourts.flmd.433581/gov.uscourts.flmd.433581.1.0.pdf (accessed August 14, 2025).
U.S. District Court for the Middle District of Florida, Order of May 20, 2025 in Garcia v. Character Technologies, p. 31, https://storage.courtlistener.com/recap/gov.uscourts.flmd.433581/gov.uscourts.flmd.433581.115.0.pdf (accessed August 14, 2025).
Florida Deceptive and Unfair Trade Practices Act (FDUTPA), http://www.leg.state.fl.us/statutes/index.cfm?App_mode=Display_Statute&URL=0500-0599/0501/0501PARTIIContentsIndex.html (accessed August 17, 2025).
“The Story of ELIZA: The AI That Fooled the World”, London Intercultural Academy, https://liacademy.co.uk/the-story-of-eliza-the-ai-that-fooled-the-world/?v=e7d707a26e7f (accessed August 17, 2025); Dave Bergmann, “The ELIZA Effect at Work: Avoiding Emotional Attachment to AI ‘Coworkers’”, IBM, https://www.ibm.com/think/insights/eliza-effect-avoiding-emotional-attachment-to-ai (accessed August 17, 2025); Brian Christian, “How a Google Employee Fell for the ELIZA Effect”, The Atlantic, https://www.theatlantic.com/ideas/archive/2022/06/google-lamda-chatbot-sentient-ai/661322/ (accessed August 17, 2025).
Although the Florida Deceptive and Unfair Trade Practices Act (FDUTPA) does not define the term “unfair,” Florida courts look to federal case law when interpreting it. The definition rests on the standards set out by the U.S. Supreme Court in FTC v. Sperry & Hutchinson Co., 405 U.S. 233 (1972). For examples of Florida courts adopting this standard, see Rollins, Inc. v. Butland, 951 So. 2d 860, 869 (Fla. 2d DCA 2006); Washington v. LaSalle Bank Nat’l Ass’n, 817 F. Supp. 2d 1345, 1350 (S.D. Fla. 2011).
Alex Pickett, “Florida Judge Rules AI Chatbots Not Protected by First Amendment”, Courthouse News Service, https://www.courthousenews.com/florida-judge-rules-ai-chatbots-not-protected-by-first-amendment/ (accessed August 14, 2025).
See Metropolitan Life Ins. Co. v. McCarson, 467 So. 2d 277, 278-79 (Fla. 1985) (holding that liability arises only where the conduct “goes beyond all possible bounds of decency” and is regarded as atrocious and “utterly intolerable in a civilized community”).
See also Koutsouradis v. Delta Air Lines, Inc., 427 F.3d 1339, 1345 (11th Cir. 2005) (confirming how high this standard is under Florida law and that ordinary insults and indignities do not cross the threshold). The rule derives from Restatement (Second) of Torts § 46(2). Florida courts have consistently held that for a family member to bring an IIED claim based on extreme and outrageous conduct directed at another family member, the claimant must have been present at the time of the conduct. See M.M. v. M.P.S., 556 So. 2d 1140, 1141 (Fla. 3d DCA 1989) (holding that parents who learned after the fact that their child had been sexually abused could not bring an IIED claim because they were not present at the time); Baker v. Fitzgerald, 573 So. 2d 873, 873 (Fla. 3d DCA 1990) (dismissing an IIED claim because the conduct was not directed at the plaintiff herself).
Character Technologies, Inc., Motion to Dismiss dated January 24, 2025, p. 6, https://storage.courtlistener.com/recap/gov.uscourts.flmd.433581/gov.uscourts.flmd.433581.59.0.pdf (accessed August 14, 2025).
Character Tech., Motion to Dismiss, p. 7; Pickett, “Florida Judge Rules”.
Character Tech., Motion to Dismiss, p. 11.
Character Tech., Motion to Dismiss, p. 12.
Character Tech., Motion to Dismiss, p. 16.
Character Tech., Motion to Dismiss, pp. 16-17.
Google, Motion to Dismiss dated January 24, 2025, p. 17, https://storage.courtlistener.com/recap/gov.uscourts.flmd.433581/gov.uscourts.flmd.433581.61.0.pdf (accessed August 14, 2025).
Noam Shazeer, Motion to Dismiss dated January 24, 2025, p. 5, https://storage.courtlistener.com/recap/gov.uscourts.flmd.433581/gov.uscourts.flmd.433581.65.0_1.pdf (accessed August 14, 2025).
Daniel De Freitas, Motion to Dismiss dated January 24, 2025, p. 6, https://storage.courtlistener.com/recap/gov.uscourts.flmd.433581/gov.uscourts.flmd.433581.63.0.pdf (accessed August 14, 2025).
Adi Robertson, “Judge Says AI Chatbot Is Not ‘Speech’ in Teen Suicide Case”, The Verge, https://www.theverge.com/law/672209/character-ai-lawsuit-ruling-first-amendment (accessed August 14, 2025).
Robertson, “Judge Says AI Chatbot Is Not ‘Speech’”.