CODED COLLUSION: THE PROBLEM OF ALGORITHMIC TACIT COLLUSION IN COMPETITION LAW

GSI Articletter, 2026 Winter Issue
GSI Team

Abstract

Artificial intelligence and algorithms increase the risk of tacit collusion in digital markets, even without human intervention. Traditional competition law rules are proving inadequate for this new problem; the article examines proposed responses such as compliance by design and algorithmic auditing.

I. INTRODUCTION

The artificial intelligence and digitalization revolution has turned algorithms, which can perform autonomous pricing in milliseconds by processing big data sets, into a linchpin of the commercial strategies of undertakings. By enhancing market transparency, removing strategic uncertainty, and enabling immediate retaliation from competitors, this technological shift erodes the very foundation of the distinction traditionally made in competition law between cartels established “in smoke-filled rooms” and lawful “conscious parallelism”. Indeed, the prospect of algorithms achieving supra-competitive prices by forging a “coded collusion” among themselves—without requiring any human involvement or explicit concurrence of wills—has transitioned from a theoretical possibility to a reality substantiated by empirical work1.

This new paradigm reveals not only a quantitative increase but also a fundamental qualitative difference from the traditional tacit collusion problem. Whereas in the traditional scenario undertakings merely react rationally to existing and largely external conditions such as an oligopolistic market structure, in the algorithmic era they actively construct a digital ecosystem conducive to tacit collusion through autonomous systems designed for profit maximization. This situation seriously challenges the conceptual framework and applicability of existing competition rules, such as Article 101 of the Treaty on the Functioning of the European Union2 (“TFEU”) and Article 4 of the Act No. 4054 on the Protection of Competition3 (“Act No. 4054”), which are fundamentally based on a meeting of the minds or at least communication between undertakings.

The primary objective of this study is to analyze whether autonomous algorithmic tacit collusion, which lacks traditional elements like human will and communication, can be comprehended within the existing legal framework and how competition law policy should adapt to this new situation. Indeed, the opaque decision-making processes of “black box” machine learning algorithms bring with them fundamental problems such as who will be held liable for an anticompetitive outcome and how this will be proven, pushing competition authorities to develop new approaches beyond traditional evidence-gathering and economic analysis methods, such as “computational antitrust”4. Within this context, after addressing the subject with its legal and economic foundations, the article will evaluate the potential of existing tools like the “presumption of concerted practice”, which is also present in Turkish competition law, and ex ante regulatory solutions such as “compliance by design” from a comparative perspective, thereby presenting concrete proposals for the future of competition policy.

II. THE LEGAL AND ECONOMIC FOUNDATIONS OF HORIZONTAL COLLABORATION

To comprehend the new legal and economic problems posed by algorithmic tacit collusion, it is first necessary to examine how competition law traditionally approaches horizontal collaborations and the fundamental concepts upon which this approach is based. Competition law is built on a principle of autonomy, where each undertaking determines its own commercial decisions independently of its competitors. The violation of this principle is prohibited under Article 4 of Act No. 4054 as “agreements, concerted practices, and decisions of associations of undertakings” that have the object or effect of restricting competition. The foundation of this legal framework is the existence of a meeting of the minds between undertakings. Even a concerted practice, which refers to a conscious practical cooperation between undertakings against the risks of competition without amounting to an agreement, requires direct or indirect communication to occur. However, there is a limit to these prohibitions: competition law does not prevent undertakings from “intelligently adapting” to the existing or anticipated conduct of their competitors. Therefore, conscious parallelism and/or tacit collusion, where undertakings exhibit parallel pricing behaviors by rationally analyzing market conditions without any communication, are traditionally not considered an infringement. The fundamental challenge posed by algorithms arises precisely in this grey area, at the point where the elements of communication and intent become blurred.

The basis for this legal distinction lies in the economic rationality that emerges in certain market structures. Traditional tacit collusion is generally seen as a natural outcome of oligopolistic markets characterized by features such as few players, high barriers to entry, product homogeneity, and market transparency5. In such markets, undertakings are aware that each decision they make will directly affect the others and that their own profits depend on the behavior of their rivals; this situation is termed “oligopolistic interdependence”. A “repeated game” dynamic prevails, where the short-term gain of price-cutting is nullified by the immediate retaliation of competitors, leading to losses for all players in the long run. This situation, modeled by game theory’s “Prisoner’s Dilemma”, reveals that explicit communication is not always necessary to reach an anticompetitive equilibrium; the market structure itself can function as a coordination mechanism. Algorithms carry the potential to make these market dynamics rational even in markets previously not conducive to tacit collusion, especially by increasing transparency and reaction speed.
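The repeated-game logic described above can be made concrete with a stylized numerical sketch. All payoff values below are hypothetical, chosen only to illustrate why shrinking the detection lag (the time between deviation and retaliation) makes price-cutting irrational:

```python
# Stylized repeated-game sketch of tacit collusion (hypothetical payoffs).
# Per-period profits: both collude -> 10 each; the deviator earns 15 until
# detected; after detection, rivals retaliate forever and everyone earns
# the competitive profit of 5.

COLLUDE, DEVIATE_ONCE, COMPETITIVE = 10, 15, 5

def present_value(stream, delta):
    """Discounted sum of a per-period profit stream."""
    return sum(profit * delta**t for t, profit in enumerate(stream))

def compare(delta, detection_lag, horizon=200):
    """Colluding pays COLLUDE every period; deviating pays DEVIATE_ONCE for
    `detection_lag` periods, then the competitive profit thereafter."""
    deviate = [DEVIATE_ONCE] * detection_lag + \
              [COMPETITIVE] * (horizon - detection_lag)
    collude = [COLLUDE] * horizon
    return present_value(collude, delta), present_value(deviate, delta)

# Slow (human) detection: the deviation goes unnoticed for 8 periods.
collude_pv, deviate_pv = compare(delta=0.9, detection_lag=8)
print(deviate_pv > collude_pv)   # True: cheating still pays

# Algorithmic detection: retaliation after a single period.
collude_pv, deviate_pv = compare(delta=0.9, detection_lag=1)
print(deviate_pv > collude_pv)   # False: adherence is the only rational strategy
```

With an eight-period lag the short-lived deviation profit outweighs the discounted punishment, while near-instant retaliation reverses the inequality, which is precisely the effect algorithms have on the repeated game.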

The fundamental reason behind the prohibition of horizontal collaborations by competition law is their negative economic effects on market efficiency and consumer welfare. Collusion between undertakings disrupts the functioning of the competitive process, leading to a series of harmful consequences for consumers. Foremost among these are prices above competitive levels, lower output levels, and a reduction in quality and innovation. This situation leads to the inefficient allocation of resources and an unjust transfer of consumer surplus to producers. Given that competition is accepted as a dynamic force that incentivizes undertakings to be more efficient, reduce costs, and innovate, it is clear that collusion eliminates these incentives. Consequently, the potential of algorithms to facilitate and proliferate tacit collusion brings with it the risk that these negative economic effects will emerge on an unprecedented scale and in a manner more difficult to detect, making the issue urgent and important from a competition policy perspective6.

III. THE DIGITAL REVOLUTION: PRICING ALGORITHMS AND THEIR MECHANISMS OF OPERATION

A. The Technical Structure and Types of Algorithms

To analyze algorithmic strategies from a competition law perspective, it is essential to understand the fundamental technical structure and functional types of these technologies. In its most general definition, an algorithm is a set of specific mathematical rules created to produce an automated result based on input data and various parameters. While it is not a new phenomenon for firms to collect data and make commercial decisions by analyzing it, what is new is the revolutionary increase in the capacity to collect, process, and analyze data through advanced algorithms. At the center of this revolution is artificial intelligence, which performs complex tasks by mimicking human intelligence. Artificial intelligence encompasses sub-technologies such as machine learning, which attempts to complete a task by learning from data and experience, and deep learning, which learns through artificial neural networks that mimic the activities of human neurons7. This technological evolution has led to a critical distinction from a competition law perspective: simple, rule-based algorithms versus autonomous, self-learning algorithms.

Algorithms are fundamentally divided into two categories based on their level of autonomy in decision-making processes. Adaptive or rule-based algorithms, also known as “first-generation”, execute a predefined set of commands. For example, an undertaking can program its algorithm with simple and static rules such as “match the competitor’s price”, “increase the price by 15% when stock reaches a certain level”, or “set the price 1% lower than the lowest-priced competitor”8. In such algorithms, the undertaking’s competitive strategy and intent are clearly visible in the algorithm’s code and can be retrospectively traced in a legal investigation.
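The static rules quoted above are simple enough to express directly in code. The following minimal sketch of such a first-generation, rule-based pricing algorithm is purely illustrative (the rule structure and thresholds are hypothetical, not drawn from any real system), but it shows why the undertaking's strategy and intent remain readable in the code itself:

```python
# Minimal sketch of a "first-generation" rule-based pricing algorithm.
# Each rule is a fixed, human-written instruction whose commercial intent
# can be traced retrospectively in an investigation (hypothetical values).

def rule_based_price(competitor_prices, stock_level, reorder_threshold=20):
    lowest = min(competitor_prices)

    # Rule 1: "set the price 1% lower than the lowest-priced competitor".
    price = lowest * 0.99

    # Rule 2: "increase the price by 15% when stock reaches a certain level".
    if stock_level <= reorder_threshold:
        price *= 1.15

    return round(price, 2)

print(rule_based_price([98.0, 102.5, 99.9], stock_level=50))  # 97.02
print(rule_based_price([98.0, 102.5, 99.9], stock_level=10))  # 111.57
```

Because every branch corresponds to an explicit instruction, an authority examining this code can reconstruct the competitive strategy directly, which is exactly what the black box algorithms discussed next make impossible.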

In contrast, machine learning algorithms or “black box” algorithms, referred to as “second-generation”, use a trial-and-error method to perform a given task and continuously develop their strategies by learning from their experiences. These algorithms are given only a general objective, such as profit maximization, and they find the optimal path to achieve this goal on their own, without human intervention. In this process, they establish a dynamic balance between exploiting known profitable strategies and exploring potentially more profitable new ones9. Examples such as Google’s AlphaGo defeating the world champion demonstrate the capacity of these algorithms to develop strategies beyond human foresight. This autonomy, in the event of an anticompetitive outcome, gives rise to the black box problem, which makes the determination of intent and liability extremely complex and creates a significant information gap for competition authorities10.
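The explore/exploit dynamic described above can be caricatured with a deliberately simplified epsilon-greedy learner. The demand curve, candidate prices, and all numbers below are hypothetical; the point is only that the algorithm is given nothing but a profit objective and discovers the best price on its own:

```python
import random

# Toy sketch of the explore/exploit balance: an epsilon-greedy learner told
# only to maximize profit, choosing among fixed candidate prices.
# The demand model and all numbers are hypothetical.

PRICES = [8.0, 10.0, 12.0]   # candidate price levels
UNIT_COST = 5.0

def demand(price):
    """Toy downward-sloping demand curve (hypothetical)."""
    return max(0.0, 20.0 - 1.2 * price)

def learn(rounds=5000, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    totals = {p: 0.0 for p in PRICES}   # cumulative profit per price
    counts = {p: 0 for p in PRICES}     # times each price was tried
    for _ in range(rounds):
        if rng.random() < epsilon or not any(counts.values()):
            price = rng.choice(PRICES)  # explore a random price
        else:                           # exploit the best price found so far
            price = max(PRICES, key=lambda p: totals[p] / max(counts[p], 1))
        profit = (price - UNIT_COST) * demand(price)
        totals[price] += profit
        counts[price] += 1
    return max(PRICES, key=lambda p: totals[p] / max(counts[p], 1))

print(learn())  # in this toy model, 10.0 maximizes profit
```

Nothing in the code specifies which price to charge; the strategy emerges from the learning loop. In a real system, where the state includes rivals' prices and the learner is a neural network, that emergent strategy can no longer be read off retrospectively, which is the black box problem discussed below.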

In addition to their technical structures, algorithms can also be classified according to their functions from a competition law perspective. Monitoring algorithms enable the acquisition of real-time and accurate information about competitors’ strategic behaviors, such as price, stock, and product portfolio. These algorithms serve as a fundamental tool that facilitates the sustainability of collusion by instantly detecting deviations from an anticompetitive agreement. Signaling algorithms, on the other hand, serve to eliminate the uncertainty and cost required for coordination by indirectly communicating intentions, such as a future price increase, to competitors11. Parallel or pricing algorithms are the software that automates the price-setting process based on this collected data and other market parameters. The case of the book The Making of a Fly on Amazon, where its price skyrocketed to $23 million due to two sellers’ algorithms being indexed to each other’s prices, is a striking example of how these algorithms can produce parallel outcomes without any need for communication12. These algorithms with different functions often work in concert, creating a complex and dynamic ecosystem that increases the risk of collusion in digital markets.

B. The Emergence Mechanisms of “Coded Collusion”

The self-learning capacity and technical structure of algorithms lay the groundwork for the emergence of a new collusion mechanism referred to as coded collusion. This mechanism is based on automating the two most fundamental functions of a cartel without a traditional meeting of the minds or communication between undertakings: the monitoring of collusion and the punishment of deviations. The first and most fundamental pillar of this process is the establishment of almost absolute market transparency through monitoring algorithms. This super-transparency perfects one of the most critical conditions for the sustainability of collusion (namely, the ability to observe competitors’ behavior and detect deviations from the agreement) cost-effectively and without the need for human intervention13.

This automated surveillance creates an even more powerful collusion mechanism when combined with signaling and punishment functions that operate at an unprecedented speed. Algorithms make it possible to probe the ground for anticompetitive coordination by sending costless signals, such as price increases that are brief enough to be easily read by competitors but go unnoticed by consumers. If this signal is not followed, or if an undertaking deviates from the collusive equilibrium by cutting prices, the retaliation mechanism is activated within seconds. The time window that provides profit to the cheating undertaking in traditional markets, which exists between deviation and retaliation, is reduced to almost zero in algorithmic markets. This significant shortening of the time between the detection of deviations and retaliation makes deviation economically irrational, turning adherence to the collusive equilibrium into the sole rational strategy14. The combination of these two mechanisms reduces the likelihood of undertakings entering a price war, thereby creating a sustainable environment for tacit collusion even in markets that were not previously oligopolistic. Undertakings gain the ability to achieve supra-competitive profits through algorithmic strategies that appear to be rational behavior, without taking the risk of entering into an illegal agreement. This situation creates a new challenge for competition authorities that is both difficult to detect and complex to legally characterize15.
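The monitor-and-punish loop described above amounts to an automated trigger strategy. The sketch below is a caricature with hypothetical price levels, not a real system, but it shows how a few lines of logic collapse the deviation-to-retaliation window to a single tick:

```python
# Sketch of the automated monitor-and-retaliate loop: hold a supra-
# competitive "focal" price, watch rivals every tick, and revert to the
# competitive price the moment any rival undercuts (hypothetical prices).

FOCAL_PRICE = 12.0        # tacitly collusive price level
COMPETITIVE_PRICE = 8.0   # punishment (price-war) level
TOLERANCE = 0.01          # ignore rounding noise in rival prices

def next_price(rival_prices, punishing):
    """One tick of the pricing loop; returns (own_price, punishing)."""
    deviation = any(p < FOCAL_PRICE - TOLERANCE for p in rival_prices)
    if punishing or deviation:
        return COMPETITIVE_PRICE, True   # retaliate within one tick
    return FOCAL_PRICE, False            # hold the collusive equilibrium

# Tick 1: everyone at the focal price -> stay put.
price, punishing = next_price([12.0, 12.0], punishing=False)
print(price)   # 12.0

# Tick 2: a rival undercuts -> immediate, automatic retaliation.
price, punishing = next_price([11.5, 12.0], punishing=punishing)
print(price)   # 8.0
```

When every undertaking runs a loop of this kind, the profitable window between cheating and punishment disappears, which is exactly why deviation becomes economically irrational in algorithmic markets.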

C. The “Black Box” Problem: The Issue of Transparency and Explainability of Algorithmic Decision-Making Processes

The most fundamental legal challenge created by coded collusion mechanisms stems from the nature of machine learning algorithms, particularly those referred to as second-generation. Unlike rule-based algorithms, the decision-making processes of these autonomous systems, which operate with machine learning and deep learning methods, have a black box nature that is opaque even to their designers. In this scenario, conceptualized by Ezrachi and Stucke as the “digital eye”16, the algorithm is equipped with a general objective such as profit maximization and learns autonomously through trial-and-error to find the most suitable strategy to achieve this goal. During this process, how the algorithm analyzes which data, what weight it assigns to which variables, and how it reaches its final decision cannot be observed from the outside and cannot be fully explained retrospectively.

This lack of transparency and explainability risks rendering fundamental concepts of competition law, such as intent and causality, dysfunctional. In a traditional cartel investigation, competition authorities try to detect the intent to restrict competition or a meeting of the minds by examining inter-undertaking communication, meeting notes, or emails. In the black box problem, however, the undertaking can argue that the algorithm was equipped with the principle of profit maximization (a legitimate and neutral objective from a competition law perspective) and that the resulting collusive outcome was the product of an unforeseen and unintended autonomous learning process. This defense severs the causal link between the undertaking’s action and the anticompetitive result, creating a significant lacuna as to whom liability should be attributed17.

The primary counter-argument developed against this legal lacuna is that undertakings are under a duty of care and diligence and should be held liable if they could reasonably foresee that their algorithms could reach an anticompetitive outcome and were prepared to take that risk. According to this approach, an undertaking that introduces such a powerful and unpredictable technological tool into the market must also bear the risk of its potential negative consequences. However, the limits of this foreseeability and how they can be proven constitute the most fundamental challenge that the black box problem poses for competition authorities18. This situation is pushing authorities to seek new and proactive methods, such as auditing whether undertakings comply with compliance by design principles during the algorithm design process.

IV. ALGORITHMIC COLLABORATION: SCENARIOS AND CASE ANALYSES

Algorithmic collaboration refers not to a single phenomenon, but to a complex spectrum of phenomena that differ according to the degree of human intervention and communication. The analysis of this spectrum is of critical importance for competition law; for as the role of the algorithm transforms from a simple tool into an autonomous decision-maker, the applicability of traditional legal concepts such as agreement and meeting of the minds becomes correspondingly more difficult. In this section, the roles of algorithms in anticompetitive agreements will be addressed along a classification axis extending from explicit agreements where human will is central, to tacit collusion created by fully autonomous systems.

A. Theoretical Scenarios

1. Explicit Agreements: The Messenger and Hub-and-Spoke Models

The simplest and legally clearest scenarios in which algorithms are used for anticompetitive purposes are explicit agreements, which are based on human will and communication. In the scenario referred to as the “messenger” in the literature19, undertakings come together through traditional means to form a cartel agreement and delegate tasks such as implementing the agreement, monitoring market prices, and punishing those who deviate from the agreement to an algorithm. In this case, the algorithm is an extension and an instrument of the anticompetitive will; therefore, such an action is considered a classic cartel under existing competition law rules and creates no new legal problem. The Topkins case in the US and the Trod/GBE case in the UK are concrete examples of this model, where competitors, after reaching an agreement, used common pricing software to implement it20.

In the more complex Hub-and-Spoke model, anticompetitive coordination is achieved indirectly through a common third party. In this model, an undertaking at the “hub” position (usually a software provider or a marketplace platform) facilitates collusion among competing undertakings at the “spoke” position by enabling the flow of competitively sensitive information or by applying common pricing rules21. The critical legal point of this scenario is whether the spoke undertakings are aware that the hub is acting as part of an anticompetitive plan and that other competitors are also involved in this plan. This awareness threshold is the fundamental element that determines whether the action is a lawful series of vertical relationships or an illegal horizontal concerted practice. As established by the Court of Justice of the European Union (“CJEU”) in the Eturas case, the existence of a concerted practice can be accepted if undertakings know or can reasonably foresee that a common system is leading to a restrictive change in competition and do not explicitly reject this practice22. Therefore, in cases where the spoke undertakings possess the necessary awareness, the software provider or platform at the hub position is held liable as a facilitator of the cartel, while the spoke undertakings are considered part of the concerted practice.

2. Tacit Collusion: Parallel Algorithms and Autonomous Coordination

The real challenge for competition law begins in scenarios of tacit collusion that arise from the unilateral use of algorithms, without an explicit agreement or communication between undertakings. In this model, termed the “predictable agent”, each undertaking independently programs its own algorithm to provide rational and predictable responses to market signals (e.g., a competitor’s price increase). The use of algorithms following similar rational strategies by multiple undertakings in the market leads to prices stabilizing at a supra-competitive level, without any communication. The fundamental difference of this scenario from traditional tacit collusion is that undertakings do not merely react passively to existing market conditions; they actively create the market conditions (absolute transparency, instant retaliation capacity, etc.) that allow this type of tacit collusion to emerge by using algorithms23.

At the most abstract and technologically advanced end of the spectrum lies the autonomous coordination scenario, referred to as the “digital eye”. In this model, machine learning algorithms, in order to achieve a general objective given to them, such as profit maximization, learn on their own through trial-and-error and without human intervention that the most optimal strategy is to act collusively with competitors. Artificial intelligence examples like Libratus, which defeated the world’s best poker players, show that machines can develop complex strategies independent of human will and can go beyond human foresight24. Although there are justified doubts that this scenario is still theoretical in real markets25, the question of who will be held responsible for the actions of these black box algorithms constitutes one of the most fundamental future challenges for competition law. As these autonomous scenarios do not contain elements such as a meeting of the minds or communication, they push the limits of traditional prohibitions under Article 4 of Act No. 4054 and Article 101 of the TFEU forming the essence of the algorithmic tacit collusion problem.

B. Judicial Reflections of Algorithmic Collusion: An Analysis of Precedent

1. Significant Examples from International Practice

The classic examples of the messenger scenario are the Topkins case in the US and the Trod-GBE case in the UK. In both cases, authorities proved the existence of a cartel agreement based on traditional evidence such as email correspondence between undertakings and ruled that the algorithms were a tool used to implement this agreement. These decisions have shown that when there is an explicit agreement based on human will, the role of the algorithm does not change the legal nature of the act, and that existing legal tools are sufficient to capture such traditional infringements26. The most important example of the more complex and indirect Hub-and-Spoke model is the CJEU Eturas decision. This decision created an important precedent by establishing that user undertakings who know or should have known about the restrictive actions of a common platform and do not object to them can also be held responsible27. The decision, by centering on the concepts of tacit approval and not explicitly opposing, has drawn a legal framework for liability for indirect coordinations that occur through platforms and software providers.

An important sign that algorithmic signaling could be considered a concerted practice emerged in the European Commission’s Container Shipping decision. In this case, the European Commission found that undertakings learned of each other’s intentions by publicly announcing future price increases and eliminated strategic uncertainty. This approach signals that if algorithms send signals that are rapid and complex enough to be understood by competitors but not noticed by consumers, this action could be characterized as a concerted practice28. In the area of vertical restraints, decisions by the European Commission, such as Asus, have concretely shown how providers use monitoring algorithms as an enforcement tool to supervise resale price maintenance29. These cases reveal that although algorithms make the detection of competition law infringements more difficult, authorities are making efforts to adapt existing legal concepts to this new technological reality.

2. The Approach of the Turkish Competition Authority and Relevant Decisions

The Turkish Competition Authority (the “Authority”) has also placed algorithmic behaviors in digital markets on its agenda and has begun to apply the existing legal framework to these new phenomena. Although the Authority does not yet have a direct decision on algorithmic tacit collusion, there are important precedents that shed light on its future approach. These decisions demonstrate both the Authority’s potential to interpret existing legal tools flexibly and its growing awareness of the new competition problems brought by the digital age.

The most significant indicator of this potential is the presumption of concerted practice in Article 4 of Act No. 4054. This presumption, even if the existence of an agreement cannot be proven, shifts the burden of proof to the undertakings if the price movements in the market resemble a collusive structure rather than a competitive one and the undertakings cannot explain this situation with rational economic justifications. The Authority’s past decisions, such as Maya and Göltaş30, where it invoked this presumption based solely on economic evidence without any communication evidence, indicate that a similar approach could be adopted in the face of unexplainable parallel price movements created by black box algorithms in the future. This mechanism theoretically carries the potential to allow authorities to intervene even in cases where they cannot access traditional evidence such as intent or communication.

A more recent and directly relevant development is the Authority’s HTM Retailers decision. In this decision, the Authority, for the first time in Turkish competition law practice, explicitly identified a Hub-and-Spoke cartel, ruling that a supplier had facilitated collusion among retailers by enabling the flow of competitively sensitive information31. This decision is an important precedent for future algorithmic Hub-and-Spoke cases as it clearly demonstrates the Authority’s will to investigate and penalize indirect coordinations that occur through a common third party (a supplier, a platform, or software), even in the absence of direct communication between competitors.

Finally, the interim measure decision concerning Trendyol, although not a direct collusion case, shows the Authority’s willingness to intervene in the functioning of algorithms. In the decision, the allegation that the platform favored its own products through “interventions via algorithms and coding that would provide an advantage against its competitors” was scrutinized32. This decision is important as it demonstrates the Authority’s capacity and intention to examine the technical infrastructure of digital platforms, described as black boxes, and to directly intervene in algorithmic manipulations it finds to be distorting competition. When these decisions are evaluated as a whole, it is seen that the Authority has a growing awareness of the dynamics of digital markets and the effects of algorithms on competition, and is ready to adapt its existing tools to these new problems.

V. LEGAL PROBLEMS: THE TEST OF THE EXISTING FRAMEWORK AGAINST “CODED COLLUSION”

A. Conceptual Insufficiency: The Limits of the “Agreement” or “Concerted Practice” Concepts

The most fundamental challenge posed by algorithmic collaboration is its incompatibility with the conceptual framework upon which the core prohibitions of competition law are built. Article 4 of Act No. 4054 characterizes restrictive conduct as an agreement or a concerted practice, and at the center of both concepts lies a conscious interaction and a common will between undertakings.

The concept of an agreement requires a meeting of the minds where undertakings express their common intention to act in a specific manner. However, in autonomous scenarios such as the “digital eye”, algorithms can reach a supra-competitive equilibrium as a result of their own learning processes, without human intervention and without a pre-programmed collusive instruction. In this situation, it is not possible to speak of a will or intent behind the actions of the machines33, which prevents the concept of an agreement from encompassing this new phenomenon.

The concept of a concerted practice, which comes into play when an agreement cannot be proven, is defined as “direct or indirect relations between undertakings which, without having reached the stage of an agreement, provide a coordination or practical cooperation that replaces their independent behaviors”. The key element of this definition is a communication aimed at eliminating strategic uncertainties about the future conduct of competitors34. However, in the predictable agent model, each undertaking unilaterally deploys its algorithm that provides rational responses by analyzing market data; the coordination between algorithms is not the result of pre-established communication but of mutual observation and reaction taking place in the market itself.

This situation blurs the line between legally permissible conscious parallelism and a prohibited concerted practice. The limit drawn by the existing legal framework at this point is the area deemed legitimate as conscious parallelism or oligopolistic interdependence, where undertakings intelligently adapt to the conduct of their competitors35. The problem created by algorithms is that they make the distinction between this legitimate adaptation and prohibited coordination almost impossible. This is because algorithms, by perfecting the observation and reaction mechanisms that are flawed even in traditional oligopolistic markets, carry the risk of turning every market into a potential area for tacit collusion. This situation gives rise to the problem of the expansion of the grey area between agreement and tacit collusion36. In conclusion, it is clear that concepts such as will, intent, and communication, which are built upon human psychology and interaction, fall short in the face of coded collusion operating with the autonomous and data-driven logic of machines, and that the existing legal framework struggles to meet this new challenge.

B. The Problem of Attributing Liability

The inadequacy of the concepts of agreement and concerted practice in the face of autonomous algorithmic behaviors directly gives rise to the problem of attributing liability. Competition law liability is traditionally imposed on the undertaking that commits the unlawful act or participates in it. However, when there is no direct human will behind a collusive outcome, as in the “digital eye” scenario, the question of whom liability should be assigned becomes complex: is the responsible party the undertaking that uses the algorithm, the programmer who designed it, or the algorithm itself? Since an algorithm does not have legal personality, it cannot be considered a legal perpetrator37. Therefore, the debate focuses on the undertaking and the programmer.

Two main approaches stand out in the doctrine regarding the liability of the undertaking. The first and strictest view is based on the principle that the undertaking is objectively liable for all instruments it uses in its commercial activities, including the unforeseen consequences of these instruments. This approach draws a parallel with holding an undertaking liable for competition infringements committed by its employees outside their authority; just as an undertaking “is held responsible for the actions of its employees, it can also be held responsible for the actions of its pricing robots”38. Indeed, as former European Commissioner Margrethe Vestager underlined, “companies can’t escape responsibility for collusion by hiding behind a computer program”. According to this view, no distinction is made between whether the algorithm is adaptive or a machine learning algorithm; the undertaking must bear the consequences of the technology it has put on the market39.

The second, more flexible view links liability to the criterion of foreseeability. According to this approach, an undertaking should be held liable if it could reasonably foresee that the algorithm could lead to an anticompetitive outcome and had accepted this risk. However, drawing the limits of this foreseeability is extremely difficult, especially for black box algorithms. At this point, Calzolari, departing from the traditional approach, argues that undertakings consciously create the market conditions conducive to collusion through algorithms. This argument, by implying that the undertaking is not a passive observer but plays an active role in bringing about the outcome, offers an important perspective that strengthens the presumption of foreseeability40.

The liability of the programmer or the third-party platform generally comes into question in Hub-and-Spoke scenarios, in cases of knowingly and willingly participating in an anticompetitive plan or facilitating collusion. In this case, the third party ceases to be merely a technology provider and becomes a part or the center of the restrictive conduct, and can be held liable for this role41.

C. Evidentiary Difficulties

Even if liability can be theoretically attributed, proving it in an investigation or a court case presents serious difficulties. Traditional competition law investigations rely on “human-produced” evidence such as emails, meeting minutes, and witness testimonies. In algorithmic collusion, however, such direct evidence is replaced by code, data sets, and complex statistical models that are extremely difficult to understand and interpret. This situation confronts competition authorities with three fundamental challenges in the law of evidence: causation, proof, and expertise.

First, the problem of establishing a causal link is at the center of algorithmic cases. It is extremely difficult to distinguish whether parallel price increases in a market are caused by the autonomous coordination of competing algorithms or by legitimate economic factors such as cost increases or demand shocks. Since the black box nature of the algorithm prevents the determination of the specific reason behind a price increase, establishing a solid causal chain between the anticompetitive behavior and the market outcome can become nearly impossible. This epistemological rupture requires authorities to close the “knowledge gap”42.

Second, the process of obtaining and examining evidence requires an approach beyond classical methods. The “smoking gun” is no longer an email, but the algorithm itself. Therefore, competition authorities need to examine the algorithm’s source code, the data it uses, its decision-making logic, and its updates. However, this is practically very difficult due to both the undertakings’ demands to protect their trade secrets and the technical complexity of the algorithms. This situation gives rise to the third and most important difficulty: the need for interdisciplinary expertise. To analyze how algorithms work, under what conditions they show a tendency for collusion, and their effects on the market, it is imperative for competition authorities to have teams composed of data scientists, computer engineers, and artificial intelligence experts, in addition to lawyers and economists43.

In the face of these evidentiary difficulties, alternative detection methods that focus on the observable outcomes of the market, rather than the process itself, gain importance. This approach argues that if a market’s behaviors (e.g., restricting supply while demand is increasing, or prices remaining sticky while costs are falling) cannot be explained by competitive logic, this can be used as a strong presumption indicating the existence of collusion, even if the internal workings of the algorithm cannot be understood. Such economic anomalies, which leave behind economic fingerprints, can offer an important starting point for authorities44.
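To illustrate, such an outcome-based screen can be reduced to a deliberately simple sketch. The data, the 10% cost-drop threshold, and the 2% price tolerance below are all hypothetical choices made purely for illustration; a real screening exercise would involve far richer econometrics:

```python
# Stylized "economic fingerprint" screen: flag a market where prices stay
# sticky while costs fall. All numbers and thresholds are illustrative only.

def sticky_price_flag(costs, prices, cost_drop=0.10, price_tol=0.02):
    """Return True if costs fell by more than `cost_drop` (here 10%) over
    the period while prices moved by less than `price_tol` (here 2%)."""
    cost_change = (costs[-1] - costs[0]) / costs[0]
    price_change = abs(prices[-1] - prices[0]) / prices[0]
    return cost_change <= -cost_drop and price_change <= price_tol

# Hypothetical quarterly data: unit costs fall ~20%, prices barely move.
costs = [10.0, 9.5, 8.8, 8.0]
prices = [14.9, 15.0, 14.9, 15.0]
print(sticky_price_flag(costs, prices))  # True -> candidate for scrutiny
```

A flag raised by such a screen would, of course, be no more than the starting presumption described above; it identifies an anomaly to be explained, not an infringement.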

VI. APPROACHES IN COMPARATIVE LAW AND NEW TRENDS

A. The European Union: Evolution from Ex Post Enforcement Limits to Ex Ante Regulation

In the face of the challenges created by algorithmic collusion, the European Union is pursuing a dual strategy by stretching its existing competition law tools and developing new regulatory approaches. The foundation of traditional ex post enforcement is Article 101 of the TFEU, which prohibits agreements and concerted practices between undertakings. As the application of this provision requires the existence of a meeting of the minds or at least a communication aimed at coordinating market behavior, it risks being conceptually insufficient in the face of fully autonomous algorithmic collusion45.

However, in its Eturas decision, the CJEU took a significant step towards adapting the limits of this concept to the realities of the digital age. The CJEU established a presumption that undertakings which are aware of a restrictive rule introduced by a common online platform and do not explicitly oppose it are deemed to have tacitly participated in a concerted practice46. This decision has strengthened the applicability of Article 101 of the TFEU in indirect collusion scenarios such as Hub-and-Spoke, where platforms and software providers play a central role. Nevertheless, even this precedent, being based on human awareness and tacit consent, falls short of covering fully autonomous “digital eye” scenarios47.

Recognizing these structural limitations of existing ex post tools and the speed of digital markets, the EU has shown a distinct shift towards an ex ante regulatory approach in recent years. The most concrete example of this new approach is the Digital Markets Act (“DMA”), which imposes a series of obligations on large digital platforms defined as “gatekeepers”. Instead of waiting for a specific competition infringement to occur and then penalizing it, the DMA aims to keep the market structure and functioning fair and contestable from the outset. The Act presents a clear list of rules that gatekeepers must comply with, such as prohibiting self-preferencing or mandating data portability48.

Although the DMA does not directly target algorithmic tacit collusion, it represents a significant mindset shift in the EU’s competition policy. During the preparatory process of this law, a “new competition tool” was proposed by the EU Commission to directly target structural competition problems like algorithmic collusion, but this tool was later relegated to a more limited role within the scope of the DMA. This process reveals the authorities’ will to no longer be content with a merely retrospective punitive role, but to intervene in the functioning of the market in a proactive and regulatory manner49. This indicates that similar ex ante regulations targeting specific problems like algorithmic collusion may also be on the agenda in the future.

B. The United States: The Sherman Act, DOJ/FTC Guidelines, and Precedent Development Through Litigation

In contrast to the European Union’s shift towards regulatory and preventive approaches, the United States’ struggle against algorithmic collusion is based on the traditional ex post model of interpreting and applying existing antitrust laws through litigation. The cornerstone of US antitrust law, Section 1 of the Sherman Act, prohibits “agreements, combinations, or conspiracies” that restrain trade. At the center of this legal framework is the element of agreement, which, similar to the meeting of the minds in EU law, means a “conscious commitment to a common scheme”50. For this reason, US courts do not consider mere parallel behaviors, i.e., conscious parallelism, as an infringement on its own; to establish the existence of an agreement, they seek the presence of plus factors that explain this similarity and go beyond rational, unilateral behavior51.

The US enforcement bodies, the Department of Justice (“DOJ”) and the Federal Trade Commission (“FTC”), have repeatedly stated that they are vigilant against the risks created by algorithms. However, this approach focuses on adapting the traditional elements of an infringement to the digital environment rather than going beyond the existing legal framework. For instance, in the US v. Topkins case, the DOJ secured a conviction by proving that competing sellers had made a price-fixing agreement and used a common pricing algorithm to implement this agreement, meaning the algorithm served as a messenger52.

Similarly, in the Airline Tariff Publishing case, which can be considered a precursor to modern algorithmic signaling, it was concluded that airline companies had reached an agreement by announcing future price increases to each other through a common digital system. These cases show that the US approach, despite the complexity of the technology, is based on the strategy of proving the existence of an agreement, if necessary through indirect evidence and “plus factors”53.

This reactive and litigation-focused model, although effective in punishing concrete infringements, remains limited in proactively shaping the market structure and preventing the emergence of tacit collusion from the outset. Indeed, DOJ and FTC officials have stated that, according to existing laws, mere algorithmic tacit collusion that does not involve an element of agreement could be seen as a rational outcome of the market and therefore may not be illegal54. This situation shows that US law, in the face of autonomous scenarios like the “digital eye” where human will and communication are completely out of the equation, faces a more pronounced legal gap compared to the EU.

C. Other Leading Countries (United Kingdom, Germany): Proactive Sector Inquiries and Data-Driven Enforcement

Between the European Union’s broad ex ante regulatory trend and the United States’ litigation-driven ex post model, other leading countries like the United Kingdom and Germany are adopting a more proactive, research-based, and flexible middle way. The basis of this approach is to deeply understand the competition problems created by algorithms and to make targeted interventions suitable for specific market dynamics. The competition authorities of these countries are guiding global debates by conducting comprehensive studies on the collusion risk of algorithms.

The most advanced example of this model is the market investigation tool possessed by the UK Competition and Markets Authority (“CMA”). This tool gives the CMA the authority to investigate markets where competition is not functioning effectively and to impose behavioral or structural remedies to address the adverse effect on competition, without the obligation to prove the existence of a specific competition infringement55. This proactive approach is extremely suitable for addressing structural problems like algorithmic tacit collusion, which do not fit into the traditional patterns of agreement or concerted practice. This is because this tool allows for bypassing the difficult-to-prove elements of intent or communication and focusing directly on the anomalies in the market’s functioning. The CMA has combined this authority with the technical capacity provided by its Data, Technology and Analytics (DaTA) unit56, consisting of data scientists and technology experts, thereby creating a data-driven enforcement model.

Similarly, the German Federal Cartel Office (Bundeskartellamt, “BKartA”), often in cooperation with the French Competition Authority (“ADLC”), is leading global discussions by publishing in-depth technical and legal reports on the effects of algorithms on competition. These studies aim not to resolve a specific case, but to build a knowledge base for policy-making and future enforcement activities. This proactive and knowledge-based approach allows authorities to understand the complexity of digital markets, to identify potential risks at an early stage, and to develop flexible solutions suitable for the specific conditions of the market without being overly interventionist. This model is gaining increasing importance in the face of the dynamic and uncertain problems brought by the algorithmic age.

D. The Approach of the Turkish Competition Authority: Existing Tools and Future Perspective

The Authority is closely monitoring the dynamics of digital markets and the competition problems created by algorithms, and is adopting a proactive enforcement approach in this area. The Authority has taken fundamental steps towards understanding the functioning of the market with studies such as the E-Marketplace Platforms Sector Inquiry57. Although there is not yet a direct decision on algorithmic tacit collusion, the Authority’s potential and will to adapt existing legal tools to these new problems are clearly emerging in recent decisions.

The most significant indicator of this potential is the presumption of concerted practice in Turkish competition law. This presumption, regulated in Article 4 of Act No. 4054, shifts the burden of proof to the undertakings in cases where market behaviors cannot be explained by rational economic justifications. This tool can, theoretically, provide the Authority with a significant evidentiary advantage in the face of unexplainable parallel price movements created by black box algorithms, which undertakings defend as an autonomous learning process58. The Authority’s exceptional past decisions, where it invoked this presumption based solely on economic evidence without any evidence of communication, indicate that a similar approach could be adopted for algorithmic cases in the future.

More importantly, the Authority has recently intervened directly in the matter by opening ex officio investigations into the automated pricing mechanisms offered by e-marketplace platforms. In these investigations, the Authority was concerned that options such as “Match the Buybox Price” offered by platforms to sellers could create a focal point for coordination among sellers, leading to price rigidity and tacit collusion. These mechanisms were considered to carry the risk of creating a Hub-and-Spoke structure, with the platform as the hub and the sellers as the spokes. The Authority’s decision to conclude its investigations into major platforms like Hepsiburada and Trendyol with behavioral commitments, such as the removal of the “Match the Buybox Price” option59, shows that it is adopting a proactive, ex ante market regulation approach instead of relying on ex post infringement findings and fines. This is a flexible and solution-oriented strategy, akin to the philosophy of the market investigation tool in the United Kingdom.

Furthermore, decisions such as the Authority’s interim measure decision concerning Trendyol reveal that the Authority is concerned not only with the risks of multilateral collusion but also with unilateral conduct, such as a platform manipulating its algorithm to favor its own products60. This situation demonstrates that the Authority has an increasing institutional capacity and will to intervene in the technical functioning of algorithms and to understand the complex dynamics of digital markets. In conclusion, Turkey’s approach is shaping up as a hybrid model that, on one hand, retains powerful ex post tools like the presumption of concerted practice, while on the other, engages in proactive and market design-oriented interventions through the commitment mechanism.

VII. SOLUTIONS AND THE FUTURE OF COMPETITION POLICY (DE LEGE FERENDA)

The unique challenges posed by algorithmic tacit collusion require competition policy to adopt a holistic approach that goes beyond traditional ex post intervention tools to include proactive and technology-focused ex ante solutions. This section will address both proposals for the adaptation of the existing legal framework and new regulatory mechanisms that will shape the future of competition policy.

A. Ex Post Solutions: Expanding the Interpretation of Existing Rules and Revising Evidentiary Standards

The first and most natural step in combating algorithmic collusion is to subject existing competition law rules, particularly the concept of a concerted practice, to a flexible interpretation that aligns with the realities of the digital age. The basis of this interpretation is the fact that undertakings actively and consciously create a market environment conducive to collusion through algorithms61. This approach considers the actions of undertakings not as a simple intelligent adaptation, but as an indirect communication or practical cooperation aimed at eliminating uncertainty about the future conduct of competitors. This interpretation accepts the continuous observation of each other’s prices by algorithms and the instant reaction to them as an implicit communication channel between undertakings.
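The dynamic at issue can be made concrete with a stylized sketch. The matching rule, step size, and price levels below are invented for illustration and are far simpler than the learning agents studied in the empirical literature; the point is only that two identical rules, each observing nothing but last period's prices, can ratchet prices to a supra-competitive level without exchanging a single message:

```python
# Two pricing bots that never communicate: each observes only last period's
# prices and applies the same simple rule -- match the lower observed price,
# then probe one small step upward (capped). All numbers are invented.

COMPETITIVE_PRICE = 5.0   # stylized competitive benchmark
MONOPOLY_CAP = 10.0       # stylized joint-profit-maximizing price
STEP = 0.25               # size of the upward probe

def next_price(own, rival):
    # Instant "retaliation": never remain above the rival's price...
    base = min(own, rival)
    # ...but test whether the rival will follow a small increase.
    return min(MONOPOLY_CAP, base + STEP)

p1 = p2 = COMPETITIVE_PRICE
for _ in range(50):                     # 50 pricing periods
    p1, p2 = next_price(p1, p2), next_price(p2, p1)

print(p1, p2)  # both ratchet up to 10.0: supra-competitive, no messages
```

The retaliation and probing here are exactly the "observation and instant reaction" the text describes; whether a court would treat such behavior as indirect communication is the open legal question.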

One of the most concrete legal tools that provides a basis for such an interpretation is the presumption of concerted practice found in Article 4 of Act No. 4054 in Turkish competition law. This presumption, which shifts the burden of proof to undertakings in cases where market behavior cannot be explained by other rational justifications, may allow authorities to intervene even when they cannot fully decipher the inner workings of a black box62.

In addition to the interpretation of existing rules, it is necessary to revise evidentiary standards to overcome the black box problem. In the absence of traditional evidence of intent or will, it is a necessity for competition authorities to shift their focus of proof from the internal workings of algorithms to the observable outcomes of the market. This results-oriented approach63 makes it more difficult for undertakings to hide behind the defense that “my algorithm made an autonomous decision”. If the outcome produced by an algorithm cannot be explained by a rational competitive justification, this should create a rebuttable presumption that the undertaking at least should have foreseen such an outcome when deploying the algorithm and assumed this risk. In this context, the inability to explain pricing behaviors in a market (for example, prices remaining constant while costs are falling or increasing in a synchronized manner) with a competitive model can be accepted as a strong plus factor or an economic fingerprint indicating the existence of collusion64.

B. Ex Ante Solutions: Algorithmic Transparency and “Compliance by Design”

Considering the evidentiary difficulties of ex post interventions and the speed of digital markets, it is inevitable for competition policy to increasingly shift its focus to ex ante and preventive solutions. The first and most fundamental step in this area is the introduction of obligations for Algorithmic Transparency and Explainable AI (XAI). This is an approach that directly targets the black box problem. Undertakings should be obligated to be able to explain to competition authorities the basic decision-making parameters of the algorithms they use, how they use which data, and how they manage potential competition risks. This allows authorities to conduct algorithm audits not to investigate a specific competition infringement, but to understand the functioning of the market and proactively identify potential risks. However, it should not be forgotten that algorithmic transparency is a double-edged sword; for excessive transparency also carries the risk of facilitating collusion by allowing competitors to more easily decipher each other’s strategies. Therefore, transparency obligations must be carefully designed to protect trade secrets and enhance competition65.

A more advanced and transformative step is the adoption of the compliance by design principle, which fundamentally changes the paradigm of liability. Inspired by the “privacy by design” principle in data protection law, this approach66 imposes a positive obligation on undertakings to design their algorithms to be compliant with competition law rules from the outset and to take all reasonable measures to ensure this compliance. With this principle, the burden of proof is effectively reversed: rather than the competition authority proving an infringement, the undertaking is expected to prove that it has shown the necessary diligence to prevent its algorithm from leading to an anticompetitive outcome. This approach largely eliminates the defense of “I didn’t know what my algorithm would do” and forces undertakings to conduct proactive risk management regarding the potential externalities of the technology they use67.

However, in the face of the reality that even the best-designed algorithms can produce unexpected collusive results, the principle of compliance by design must be complemented by the observability of outcomes. According to this complementary principle, when an undertaking notices or should notice that its algorithm is creating an anticompetitive result (for example, a market-wide profit increase that cannot be explained by costs), it cannot continue to benefit from this situation; it comes under the obligation to intervene to restore competitive conditions in the market68.

Finally, one of the most proactive solution proposals is the introduction of a mandatory testing and approval mechanism for algorithms in certain risk categories, as in the MiFID II regulation for algorithmic trading in financial markets. In this model, undertakings operating in high-risk markets could be required to test their pricing algorithms in a “sandbox” environment to show that they do not lead to collusion before releasing them to the real market, and to submit the results of these tests to the competition authority69. Such ex ante mechanisms represent important steps that evolve competition policy from a purely punitive role to a regulatory one that proactively shapes the fair and competitive functioning of markets.
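In miniature, such a pre-deployment test might look like the following sketch, in which a candidate pricing rule is run against a copy of itself in a simulated market and certified only if average prices remain near a competitive benchmark. The market model, the two example rules, and the 10% tolerance are hypothetical assumptions, not features of any actual regulatory regime:

```python
# Minimal "sandbox" certification sketch, loosely inspired by the idea of
# pre-deployment testing (as MiFID II requires for trading algorithms).
# The market model, rules, and 10% tolerance are purely illustrative.

COMPETITIVE_PRICE = 5.0
CAP = 10.0
TOLERANCE = 0.10  # certified only if self-play avg price stays within 10%

def undercutter(own, rival):
    """Competitive-style rule: undercut the rival, never price below cost."""
    return max(COMPETITIVE_PRICE, rival - 0.05)

def follower_prober(own, rival):
    """Collusion-prone rule: track the rival and probe slightly upward."""
    return min(CAP, rival + 0.10)

def sandbox_certify(rule, periods=200):
    """Run the rule against a copy of itself and check the average price."""
    p1 = p2 = COMPETITIVE_PRICE
    total = 0.0
    for _ in range(periods):
        p1, p2 = rule(p1, p2), rule(p2, p1)
        total += (p1 + p2) / 2
    return total / periods <= COMPETITIVE_PRICE * (1 + TOLERANCE)

print(sandbox_certify(undercutter))      # True  -> would be certified
print(sandbox_certify(follower_prober))  # False -> flagged before release
```

Self-play is used here because the regulatory concern is precisely what happens when similar algorithms meet in the market; a fuller sandbox would of course test against a range of rival strategies and demand conditions.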

C. Institutional Solutions: Enhancing the Capacity of Competition Authorities and New Powers

The effective implementation of ex post and ex ante solutions depends on the structural and jurisdictional adaptation of the institutions that will bring these solutions to life, namely the competition authorities, to the requirements of the digital age. It is imperative for competition authorities, traditionally composed of lawyers and economists, to undergo an institutional transformation to be able to understand and supervise the complex technical structure of algorithms and their data-driven dynamics.

The first and most fundamental step of this transformation is the strengthening of the technological and data science capacity of competition authorities. This is possible not only through the training of existing personnel but also by incorporating data scientists, computer engineers, and artificial intelligence experts into the institution. The “Data, Technology and Analytics” (DaTA) unit established within the CMA is a concrete step taken in this direction and serves as a model for other authorities70. Such interdisciplinary teams will enable authorities not only to investigate suspicious behaviors but also to develop a computational antitrust capacity by using artificial intelligence tools themselves to proactively detect competition infringements, as the Korean Competition Authority does.

Enhancing institutional capacity also requires equipping authorities with new powers. The most critical power in this context is the power of algorithmic auditing. This power allows competition authorities to examine the functioning of algorithms used in certain markets, the data they use, and their decision-making parameters, even in the absence of a suspicion of an infringement. This is an indispensable tool for overcoming the black box problem and for supervising whether the principle of “compliance by design” is being implemented. This auditing power must be supported by regulations that impose an obligation on undertakings to explain how their algorithms work and to provide transparency. Within the scope of the market investigation tool in the United Kingdom, the CMA has the authority to request detailed information from undertakings about the design and operation of their algorithms and to conduct tests71. Granting a similar power to other competition authorities will elevate them from a reactive investigator role to a proactive supervisor role that oversees the health of digital markets.

Finally, the global nature of digital markets makes international cooperation inevitable. No single competition authority can supervise the complex algorithms of global technology giants alone. Therefore, strengthening cooperation mechanisms through platforms such as the Organisation for Economic Co-operation and Development (“OECD”), the International Competition Network (“ICN”), and the European Competition Network (“ECN”), which facilitate the sharing of data, code, and methodology among authorities, is of vital importance for the effectiveness of future competition policy72.

D. Specific Policy and Legislative Proposals for Turkey

In light of the global trends and solution proposals discussed above, it is necessary to develop a specific policy and legislative framework for Turkey’s struggle against algorithmic collusion. It would be more appropriate for this framework to focus on strengthening the existing structure and a gradual adaptation process rather than on sudden and radical legislative changes. Indeed, it can be argued that Act No. 4054, in its current form, has the potential to cover many algorithmic collusion scenarios, especially with tools like the presumption of concerted practice, and therefore there is no urgent need for a legislative amendment.

In this direction, it is considered that the first and most important step could be the publication of a guideline by the Competition Authority regarding algorithmic collaboration. This guideline should provide legal certainty for market players by clarifying how the concepts of agreement and concerted practice will be interpreted in algorithmic scenarios, what types of algorithmic behaviors will be considered as plus factors, and the minimum standards of care expected from undertakings within the framework of the compliance by design principle73.

Secondly, it is of critical importance for the Authority to strengthen its institutional capacity in the fields of data science and technology and to establish a specialized unit in this area, on the model of the CMA’s “Data, Technology and Analytics” unit. An interdisciplinary team of this kind would enable the Authority not only to investigate suspicious behaviors but also to use artificial intelligence tools itself to proactively detect competition infringements.

Thirdly, the proactive use of the Authority’s sector inquiry power should be encouraged. Sector inquiries in markets where algorithms are used intensively, such as e-commerce, online travel agencies, and digital advertising, will serve as a radar for understanding market dynamics and for the early detection of potential competition risks74.

Finally, the effective use of leniency and commitment mechanisms should be encouraged. Considering the evidentiary difficulties in detecting algorithmic infringements, leniency programs are a vital tool for obtaining inside information. The commitment mechanism, on the other hand, can be an effective intervention tool for dynamic digital markets by allowing for the implementation of behavioral (e.g., reprogramming the algorithm) or structural solutions that quickly correct the competitive structure of the market, without going through long and complex investigation processes. Indeed, the Authority’s conclusion of its investigations into the coordination risks created by the automated pricing mechanisms of large e-marketplace platforms with commitments that include the removal of the most collusion-prone algorithmic rules, such as “Match the Buybox Price”, is a current and concrete example of how effectively this tool can be used as a proactive market regulation instrument75.

VIII. CONCLUSION

The rise of artificial intelligence and pricing algorithms confronts competition law with the phenomenon of an autonomous coded collusion, where human-centric concepts such as a meeting of the minds and communication, which form its foundation, are insufficient. This new form of collusion, which qualitatively differs from traditional oligopolistic interdependence, creates a deep legal lacuna, especially due to the opaque decision-making processes of black box algorithms, which challenge the limits of the existing legal frameworks in the areas of attribution of liability and the law of evidence.

In the face of this challenge, it is clear that ex post enforcement mechanisms alone will be insufficient. An effective competition policy necessitates a hybrid approach that combines traditional tools, such as the flexible interpretation of existing rules, with ex ante regulatory solutions like compliance by design and proactive market investigation mechanisms. The presumption of concerted practice in Turkish competition law has the potential to strengthen the ex post leg of this hybrid approach, while the Authority’s recent proactive use of the commitment mechanism, especially in digital markets, to swiftly correct the competitive functioning of the market and to eliminate potential competition risks before an investigation is completed and an infringement is found, indicates that this tool is gaining an ex ante intervention dimension beyond its ex post nature. Thus, it is seen that the Authority is adopting the commitment mechanism not only as a solution that reacts to past infringements but also as a flexible and solution-oriented regulatory tool that shapes the future competitive structure of markets in line with the dynamics of the digital age.

Ultimately, the ability of competition law to maintain its effectiveness and legitimacy in the digital age depends on its evolution from a reactive punitive model to an enforcement understanding that has strengthened technological capacity and proactively shapes the fair functioning of markets.

DİPNOT

  1. Ariel Ezrachi/ Maurice E. Stucke, Yapay Zeka ve Danışıklı Anlaşma: Bilgisayarlar Rekabeti Engellediğinde, University of Illinois Law Review, C. 2017 S. 5, 2017, s. 1776-1783; Pelin Teber Karabudak, Algoritmik Stratejiler Yoluyla Rekabete Aykırı Anlaşmalar, Rekabet Kurumu Uzmanlık Tezi, Ankara 2022, s. 1, 17-20.

  2. Avrupa Birliği’nin İşleyişine Dair Antlaşma, (Erişim:22.08.2025), https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:12012E/TXT:en:PDF.

  3. Kanun düzenlemesi için bkz. Rekabetin Korunması Hakkında Kanun metni, (Erişim:22.08.2025), https://www.mevzuat.gov.tr/mevzuat?MevzuatNo=4054&MevzuatTur=1&MevzuatTertip=5.

  4. Renato Nazzini/James Henderson, "Closing the Current Knowledge Gap on Algorithmic 'Collusion' and the Role of Computational Antitrust", Stanford Computational Antitrust, Vol. 4, 2024, pp. 9, 25-27.

  5. Ezrachi/Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy, Harvard University Press, Cambridge, 2016, pp. 59-62; Ai Deng, "What Do We Know About Algorithmic Tacit Collusion?", Antitrust, Vol. 33, No. 1, 2018, pp. 89-90.

  6. Francisco Beneke/Mark-Oliver Mackenrodt, "Remedies for algorithmic tacit collusion", Journal of Antitrust Enforcement, Vol. 9, No. 1, 2021, pp. 159-160; Cihan Doğan, "Algoritma ve Rekabet Hukuku: 4. Madde İhlallerinin Dijital Görünümleri" [Algorithms and Competition Law: Digital Manifestations of Article 4 Infringements], Galatasaray Üniversitesi Hukuk Fakültesi Dergisi, Vol. 16, No. 2, 2017, p. 399.

  7. Karabudak, pp. 5-7.

  8. Doğan, "Fiyatlama Algoritmaları: Rekabet Hukuku ve İktisadı Perspektifinden Yaklaşım" [Pricing Algorithms: An Approach from the Perspective of Competition Law and Economics], Uygulamalı Rekabet Hukuku Seminerleri 2018, İstanbul 2019, p. 297.

  9. Frédéric Marty/Thierry Warin, "Deciphering algorithmic collusion: insights from rogue algorithms and implications for antitrust enforcement", Journal of Economy and Technology, Vol. 3, 2025, p. 38; Emilio Calvano/Giacomo Calzolari/Vincenzo Denicolò/Sergio Pastorello, "Artificial Intelligence, Algorithmic Pricing, and Collusion", American Economic Review, Vol. 110, No. 10, 2020, pp. 3267-3268.

  10. Ezrachi/Stucke, Virtual Competition, pp. 76-77; Nazzini/Henderson, p. 9.

  11. Karabudak, pp. 8-10.

  12. Doğan, "Algoritma ve Rekabet Hukuku", p. 407; Ezrachi/Stucke, "Artificial Intelligence & Collusion", pp. 1780-1781.

  13. Ezrachi/Stucke, Virtual Competition, pp. 60-62; Karabudak, p. 8.

  14. Doğan, "Algoritma ve Rekabet Hukuku", pp. 397, 421; Deng, p. 89.

  15. Ezrachi/Stucke, "Artificial Intelligence & Collusion", pp. 1791-1793; Karabudak, pp. 19-20.

  16. Ezrachi/Stucke, "Artificial Intelligence & Collusion", pp. 1795-1796.

  17. Luca Calzolari, "The Misleading Consequences of Comparing Algorithmic and Tacit Collusion: Tackling Algorithmic Concerted Practices Under Art. 101 TFEU", European Papers, Vol. 6, No. 2, 2021, pp. 1203-1204; Giacalone, "Algorithmic Collusion: Corporate Liability and the Application of Art. 101 TFEU", European Papers, Vol. 9, No. 3, 2024, pp. 1053-1054.

  18. Karabudak, p. 51; Doğan, "Algoritma ve Rekabet Hukuku", pp. 425-426.

  19. Ezrachi/Stucke, "Artificial Intelligence & Collusion", p. 1784.

  20. Karabudak, pp. 23-24; Doğan, "Algoritma ve Rekabet Hukuku", pp. 408-411.

  21. Karabudak, pp. 27-30; Ezrachi/Stucke, Virtual Competition, pp. 46-55.

  22. Giacalone, p. 1057.

  23. Calzolari, p. 1207; Ezrachi/Stucke, "Artificial Intelligence & Collusion", pp. 1789-1791.

  24. Doğan, "Algoritma ve Rekabet Hukuku", p. 424.

  25. Deng, pp. 90-91; Liu, "Algorithmic Tacit Collusion", New Zealand Business Law Quarterly, Vol. 25, 2019, pp. 209-210.

  26. Karabudak, pp. 23-24; Nazzini/Henderson, p. 10.

  27. Giacalone, p. 1057.

  28. Karabudak, pp. 48-49.

  29. Doğan, "Fiyatlama Algoritmaları", pp. 308-309.

  30. Doğan, "Fiyatlama Algoritmaları", pp. 312-313.

  31. Karabudak, pp. 29, 43.

  32. Karabudak, p. 10.

  33. Ezrachi/Stucke, "Artificial Intelligence & Collusion", pp. 1795-1796.

  34. Karabudak, p. 18; Calzolari, pp. 1208-1209.

  35. Giacalone, p. 1049 (citing the Suiker Unie judgment).

  36. Doğan, "Algoritma ve Rekabet Hukuku", p. 423.

  37. Giacalone, p. 1054.

  38. Doğan, "Algoritma ve Rekabet Hukuku", p. 426.

  39. Karabudak, p. 51.

  40. Calzolari, p. 1207.

  41. Karabudak, pp. 41-43.

  42. Nazzini/Henderson, pp. 25-26; Ezrachi/Stucke, Virtual Competition, pp. 76-77.

  43. Nazzini/Henderson, pp. 20-21; Karabudak, pp. 54-55, 62.

  44. Deng, pp. 93-94.

  45. Giacalone, pp. 1049, 1059.

  46. Karabudak, pp. 38-40.

  47. Ezrachi/Stucke, "Artificial Intelligence & Collusion", pp. 1795-1796.

  48. Hawkes, "A Market Investigation Tool to Combat Algorithmic Tacit Collusion: An Approach for the (Near) Future", European Legal Studies Research Papers in Law, No. 3, 2021, p. 21.

  49. Hawkes, pp. 18-19, 22.

  50. George Slover, "Is Artificial Intelligence a New Gateway to Anticompetitive Collusion?", (Accessed: 22.08.2025), https://cdt.org/insights/is-artificial-intelligence-a-new-gateway-to-anticompetitive-collusion/.

  51. Liu, p. 205; Deng, p. 91.

  52. Karabudak, p. 23.

  53. Doğan, "Algoritma ve Rekabet Hukuku", pp. 421-422; Ezrachi/Stucke, "Artificial Intelligence & Collusion", p. 1786.

  54. Deng, p. 89 (citing a statement by a DOJ official).

  55. Hawkes, pp. 17-18.

  56. Nazzini/Henderson, p. 20.

  57. Karabudak, p. 62.

  58. Doğan, "Fiyatlama Algoritmaları", pp. 312-313.

  59. Turkish Competition Board, Trendyol decision dated 03.10.2024 and numbered 24-40/950-409, (Accessed: 22.08.2025), https://www.rekabet.gov.tr/Karar?kararId=0f206169-3236-4cb8-a9eb-8a0fb70f58b7.

  60. Karabudak, p. 10.

  61. Calzolari, p. 1207.

  62. Doğan, "Fiyatlama Algoritmaları", pp. 312-313; Karabudak, p. 19.

  63. Deng, p. 93.

  64. Slover, "Is Artificial Intelligence a New Gateway to Anticompetitive Collusion?"; Beneke/Mackenrodt, p. 159.

  65. Slover, "Is Artificial Intelligence a New Gateway to Anticompetitive Collusion?"; Beneke/Mackenrodt, pp. 169-170.

  66. Giacalone, p. 1060.

  67. Caforio, "Algorithmic tacit collusion: a regulatory approach", The Competition Law Review, Vol. 15, No. 1, 2023, pp. 25-27; Karabudak, p. 61.

  68. Giacalone, p. 1061; Deng, p. 93.

  69. Nazzini/Henderson, pp. 29-30.

  70. Nazzini/Henderson, pp. 13, 20-21; Karabudak, p. 62.

  71. Hawkes, pp. 17-18; Nazzini/Henderson, pp. 27-28.

  72. Nazzini/Henderson, pp. 20, 22-23.

  73. Karabudak, pp. 63-64; Caforio, pp. 25-27.

  74. Karabudak, pp. 57-58; Hawkes, p. 16.

  75. Turkish Competition Board, Trendyol decision dated 03.10.2024 and numbered 24-40/950-409, (Accessed: 22.08.2025), https://www.rekabet.gov.tr/Karar?kararId=0f206169-3236-4cb8-a9eb-8a0fb70f58b7.
