Abstract
This article comprehensively addresses the regulations governing AI-supported personal data processing in employment relationships in Türkiye and within the international framework, and it examines the ethical and legal dimensions of these practices.
I. INTRODUCTION
Today, the integration of artificial intelligence technologies into working life is significantly transforming the traditional legal understanding regarding the relationship between employer and employee. Particularly in areas such as recruitment, performance evaluation, task assignments, disciplinary procedures and workplace supervision, AI-supported systems offer employers faster, data-driven and cost-effective decision-making opportunities. However, the proliferation of these technologies brings with it various legal, ethical and fundamental rights issues in processes such as the collection, processing and analysis of employees’ personal data. In this context, questions about how artificial intelligence technologies are used in personal data processing, within which legal boundaries these practices can be considered legitimate and how employees’ fundamental rights and freedoms can be protected are becoming increasingly important.
In this study, AI-supported data processing applications in employment relationships are addressed within a legal and normative framework. Specifically for Türkiye, within the scope of the Personal Data Protection Act No. 6698 (“PDPA”), the Turkish Code of Obligations No. 6098 (“TCO”) and the Labour Act No. 4857 (“Labour Act”), the concepts of data controller and data processor are examined and the balance between the employer’s powers of supervision and control and the employees’ right to privacy is discussed. Within the framework of European Union legislation, the limitations and obligations imposed on automatic decision-making systems in the employment relationship are evaluated in line with the General Data Protection Regulation (“GDPR”) and the Artificial Intelligence Act (“AI Act”). In addition, despite the lack of a federal framework in the United States, it is explained how state-based regulations guide employers’ responsibilities. The article discusses in detail principles such as transparency, accountability, prevention of discrimination, data minimisation and ethical responsibility in the context of AI applications. Furthermore, in the light of national and international court decisions and the guidelines issued by regulatory authorities, the boundaries of automatic decision-making mechanisms and the rights of employees to object are analysed.
II. THE CONCEPT OF PERSONAL DATA
The protection of personal data has today become a legal and social issue of great importance in the context of the fundamental rights and freedoms of the individual. In this context, the PDPA establishes the rules for the processing of personal data in Türkiye with the aim of protecting individuals’ privacy and fundamental rights. Pursuant to Article 1 of the PDPA[1], respect for the privacy of individuals is adopted as a fundamental principle in the processing of personal data; and the obligations of the natural and legal persons who process data, together with the procedures and principles they must follow, are clearly laid down.
Under the PDPA, personal data are defined as any information relating to an identified or identifiable natural person; elements such as name and surname, contact information, health data, internet browsing history and personal correspondence fall within this scope. The concept of the processing of personal data covers a wide range of operations, such as obtaining, recording, storing, preserving, modifying, disclosing, transferring, classifying or preventing the use of data by automatic means or manually, provided that the data are part of a data recording system. The performance of these activities outside the legal framework envisaged by the PDPA is considered an unlawful data-processing act and gives rise to both administrative sanctions and criminal liability.
TCO also includes special provisions on the protection of personal data in the context of employment relationships. According to Article 419 of the TCO[2], the employer may process an employee’s personal data only for the purpose of assessing the employee’s suitability for employment or for the performance of the employment contract, and only to the extent limited to these purposes. This provision restricts the employer’s personal data processing authority within the principles of connection with purpose and proportionality. Therefore, the collection or processing of data relating to the employee’s private life which have no relevance to the work constitutes a violation of this article and does not comply with the law. Indeed, the Court of Cassation emphasises that an employer’s arbitrary intervention in the private life of an employee — for example, conducting unauthorised inspections of personal e-mail accounts or private messages — clearly amounts to a violation of the employee’s personal rights.
Although the Labour Act does not regulate personal data directly, it contains provisions aimed at protecting the confidentiality of employees’ personal data through certain articles. In particular, Article 75 of the Labour Act[3] imposes an obligation on the employer to create a personal file for each employee. It is prohibited to share the information contained in this file with third parties in a manner contrary to the legislation and the rule of good faith. This regulation aims at protecting and safeguarding the personal data of employees. On the other hand, Article 135 of the Turkish Penal Code (“TPC”) No. 5237[4] and the subsequent provisions make it an offence to obtain, share or disclose personal data unlawfully; these provisions are also applicable within the scope of employment relationships.
III. THE CONCEPT OF PERSONAL DATA IN THE EMPLOYMENT RELATIONSHIP
Within the scope of the PDPA, a data controller is defined as a natural or legal person who determines the purposes and means of processing personal data and is also responsible for the establishment and management of the data recording system. A data processor, by contrast, means a natural or legal person who processes personal data on the basis of the authority granted by the data controller and on behalf of the data controller. The rationale of the PDPA emphasises that the data processor acts only in accordance with the instructions of the data controller. In this context, it is generally accepted in assessments specific to employment relationships that the employer should be regarded as the data controller and the other employees at the workplace should be regarded as data processors.
At the same time, employees whose personal data are processed and candidates who apply for employment are considered data subjects within the meaning of the PDPA. For these employees and candidates, not only basic identity information but also special categories of personal data such as criminal record information, professional data, psychological assessment results and information on union or political party membership held in digital environments or in personal files benefit from the protection provided by the PDPA. The PDPA imposes various obligations on employers, as data controllers, not only in relation to the processing of personal data but also in relation to ensuring data security. According to the first paragraph of Article 12 of the PDPA[5], data controllers are obliged to take appropriate technical and organisational measures to prevent the unlawful processing of personal data, to prevent unauthorised access and to ensure the safekeeping of the data.
In this framework, it has become a legal obligation for employers to take all necessary measures to protect the personal data of employees. The second paragraph of the same article states that where personal data are processed by other natural or legal persons on behalf of the data controller, the data controller and the data processors have joint responsibility for taking these measures. Thus, it is explicitly laid down that obligations concerning data security will apply not only to data controllers but also to persons and entities in the position of data processors.
The PDPA also makes it mandatory for employers to carry out internal audits or to have external audits carried out to ensure that data-processing activities carried out in the workplace comply with the PDPA. In addition, both employers acting as data controllers and individuals in the position of data processors have a duty of confidentiality, and this obligation continues after the termination of the employment relationship. Lastly, in the event that the personal data processed are obtained by third parties by unlawful means, the employer, as data controller, is legally obliged to notify the Personal Data Protection Board and the data subjects immediately.
The PDPA grants employees whose personal data are processed various rights which they may exercise against their employer. These rights are regulated under Article 11 of the PDPA[6] under the heading “rights of the data subject”. Under this provision, employees first have the right to apply to their employer to learn whether personal data relating to them are being processed. They also have the right to request information about the processed data and to learn the purpose of the processing, whether the data are used in accordance with these purposes and the identity of third parties to whom the data are transferred. These rights are also regarded as part of the employer’s duty to provide information.
In addition, employees have the right to request the correction of any incompleteness or inaccuracy they identify in their personal data or to request the deletion and destruction of the data. These requests must also be notified to third parties to whom the personal data have been transferred. Furthermore, if employees suffer damage as a result of the processing of their personal data in breach of the procedures and principles specified in the PDPA, they have the right to claim compensation for the damage.
In addition to the right to request the correction of material errors in personal data, the PDPA also provides employees with the opportunity to object to negative consequences that may arise as a result of the processing of these data through automated systems. Moreover, if the reasons for processing personal data disappear, employees may request their employer to delete or destroy their data. There are views that this right to deletion and destruction is linked to the concept of the “right to be forgotten”.
IV. PROCESSING PERSONAL DATA WITH ARTIFICIAL INTELLIGENCE IN THE EMPLOYMENT RELATIONSHIP
Artificial intelligence technologies are systems based on algorithms and software equipped with learning capabilities, pattern recognition and decision-making abilities similar to those of humans. These systems have the potential to analyse large amounts of data, automate certain tasks and make processes more efficient. In the employer–employee relationship, AI applications are used in many areas, starting from recruitment and continuing throughout the term of the employment contract. Indeed, AI-based programmes used at the recruitment stage analyse candidates’ résumés, filter them according to pre-defined criteria and rank candidates according to their success scores. These applications enable companies, especially those that receive a large number of applications, to conduct their human resources processes more quickly and systematically.
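The filtering-and-ranking step described above can be sketched in a few lines of code. The criteria, weights and pass threshold below are invented for illustration only; real screening systems use far richer (and usually opaque) models, which is precisely why the transparency obligations discussed in this article matter.

```python
# Hypothetical sketch of keyword-based CV screening. The criteria,
# weights and threshold are illustrative assumptions, not any
# vendor's actual method.

CRITERIA = {"python": 3, "sql": 2, "project management": 2, "english": 1}
THRESHOLD = 4  # minimum success score to pass the automated filter

def score_resume(text: str) -> int:
    """Sum the weights of every pre-defined criterion found in the CV text."""
    text = text.lower()
    return sum(w for kw, w in CRITERIA.items() if kw in text)

def rank_candidates(resumes: dict[str, str]) -> list[tuple[str, int]]:
    """Keep candidates at or above the threshold, ranked by success score."""
    scored = {name: score_resume(cv) for name, cv in resumes.items()}
    passed = {n: s for n, s in scored.items() if s >= THRESHOLD}
    return sorted(passed.items(), key=lambda item: item[1], reverse=True)

candidates = {
    "A": "Python and SQL developer, fluent English",
    "B": "Project management experience",
    "C": "Python, SQL, project management, English",
}
print(rank_candidates(candidates))  # candidate B is silently filtered out
```

Note how candidate B never reaches a human reviewer at all; when such an outcome has legal or similarly significant effects, it is exactly the kind of automated decision the GDPR and the PDPA constrain.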
In recruitment processes, practices involving the processing of visual and auditory data obtained via video using AI technologies are becoming increasingly widespread among employers. AI-supported online systems can analyse personality, knowledge and aptitude tests administered to candidates; they also have the capacity to process various parameters simultaneously during the interview — including the candidate’s answers, facial expressions, breathing rate and tone of voice — and to make comprehensive evaluations.
After the employment relationship has been established, AI systems are effectively used in many processes such as measuring employees’ performance, monitoring their productivity and ensuring general workplace oversight. For example, software that collects behavioural information such as employees’ computer-usage data, e-mail traffic, task completion times and break times analyses these data through AI algorithms and provides automatic reports to employers. In addition, AI-supported smart camera systems used in workplaces for security reasons provide employers with continuous surveillance by offering advanced functions such as facial recognition, location determination and the detection of unusual movements.
AI systems play an important role in tracking employees’ performance. In this context, monitoring and surveillance practices carried out for the purpose of measuring performance provide employers with a legitimate basis for processing employees’ personal data. Advances in technologies such as the Internet of Things and GPS and their integration with AI have made it possible to measure various parameters that had not previously been taken into account. Accordingly, employees’ habits regarding the use of tools provided by the employer, their working hours and the outputs they produce while performing their duties can be treated as personal data and processed. AI systems provide employers with continuous monitoring of production processes and employees throughout working time and allow instantaneous intervention. Therefore, in line with the duty to inform mentioned above, it is of great importance that employees are comprehensively informed about which tools (for example, robotic equipment or AI-supported imaging devices) are used to collect their data and how these data are processed. Moreover, the increasing use of AI systems and the developments in wearable technologies provide employers with new opportunities in the processes of monitoring employees and collecting data.
Nevertheless, these technologies may, in some situations, lead to consequences that amount to automatic decision-making. For example, the non-employment of a candidate who fails to obtain a sufficient score as a result of algorithmic assessment, or the subjecting of an employee who is evaluated as having low performance to disciplinary action, may arise directly as a result of AI systems. Therefore, it is of great importance that AI-based applications are designed in a way that is compatible with the fundamental principles of labour law — transparency, accountability, the prohibition of discrimination and the protection of private life. Otherwise, serious legal and ethical problems may arise, such as violations of employees’ personal rights, unauthorised surveillance of their private lives or treatment contrary to the principle of equality.
V. PROCESSING PERSONAL DATA WITH ARTIFICIAL INTELLIGENCE IN THE EMPLOYMENT RELATIONSHIP IN TÜRKİYE
In Türkiye, although legal and technical debates regarding the use of AI technologies in employment relationships are increasing, there are still no concrete examples of practice in this area in the literature. However, the Constitutional Court adopts an approach that takes into account the fundamental rights of employees in the supervision and data-processing practices carried out by employers using technological tools. An important example in this regard is the decision of the Constitutional Court dated 12 January 2021 in the case of “Altınörgü”[7]. In that case, the applicant, who was employed at a bank, alleged that his right to the privacy of his private life and his freedom of communication had been violated after the employer terminated the employment contract based on the information obtained by examining the correspondence in the applicant’s corporate e-mail account. The Constitutional Court accepted that the employer has the power to monitor communication tools such as the computer and corporate e-mail account allocated to the employee within reasonable limits, but underlined that this power is subject to certain restrictions. In the decision, it was stated that, in the face of the monitoring and control possibilities that have emerged with technological advances, a fair balance must be struck between the employer’s legitimate interests and the employee’s fundamental rights, and certain assessment criteria were set out in this context. The Court accepted that the employer has the authority to monitor the digital communication tools allocated to the employee within the scope of management rights but emphasised that this power may be used only for legitimate purposes. Monitoring carried out for purposes such as ensuring workplace order and safety is considered legitimate; however, these practices must be directly connected with the conduct of the work and must not harm the essence of the employee’s fundamental rights and freedoms.
The decision clearly stated that the employer does not have an unlimited and absolute surveillance power over communication tools. Furthermore, by virtue of the principle of transparency, it is compulsory for the employer to inform the employees in advance that monitoring will be carried out. This information must include matters such as which data will be processed, for what purposes, the scope, duration and method of monitoring and with whom the collected data may be shared and must enable the employee to know in advance within what limits he or she can use the communication tools. Thus, the employee will have the possibility to protect his or her private life and to take the necessary measures. In addition, in accordance with the principles of proportionality and necessity, the employer’s interference with the employee’s fundamental rights through monitoring must be proportionate to the objective sought, and if it is possible to achieve the same objective with less intrusive measures, more intrusive measures should be avoided. Otherwise, the actions taken will be considered unlawful.
The Constitutional Court, in the decision in question, evaluated the interference with the employee’s rights to the privacy of private life under Article 20 of the Constitution No. 2709 (“Constitution”)[8] and to freedom of communication regulated in Article 22[9]. In the specific case, it ruled that the applicant’s rights had been violated due to the employer’s failure to adequately inform the employee in advance and the implementation of a comprehensive monitoring system. This decision constitutes an important precedent emphasising that employers must comply with the principles of transparency and proportionality in all technological monitoring and data-processing activities, including artificial intelligence.
Consequently, the Turkish legal system accepts that practices such as personnel selection, performance monitoring and supervision carried out using artificial intelligence are subject to the general principles of data protection. It is mandatory that AI-supported data processing activities be carried out in accordance with the law. In this context, during the development and implementation of AI systems, no violation of the constitutionally protected fundamental rights and freedoms of employees whose personal data are processed should occur. Moreover, when employees’ personal data are processed by artificial intelligence, the conditions for the processing of personal data and special categories of personal data laid down in the PDPA must be fully satisfied. In the “Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence” published by the Personal Data Protection Authority, the importance of basic principles such as data minimisation, anonymisation and privacy impact assessment in AI applications is emphasised, and it is recommended that data-protection compliance programmes should be established from the outset. In this context, Turkish law, in general, aims to ensure that AI-supported personnel data processing practices are used in accordance with legal boundaries and supervisory principles, without completely excluding the opportunities that these technologies offer.
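As a rough illustration of the data-minimisation principle highlighted in the Authority's recommendations, the sketch below keeps only the fields needed for a stated purpose and replaces the employee's name with a pseudonym before any AI analysis. The field names are hypothetical, and note that hashing a name is pseudonymisation rather than true anonymisation, since a small input space can in principle be reversed by guessing.

```python
import hashlib

# Hypothetical sketch of data minimisation before feeding an AI system:
# keep only purpose-limited fields and pseudonymise the direct
# identifier. Field names are illustrative assumptions.

ALLOWED_FIELDS = {"role", "tasks_completed", "hours_worked"}  # purpose-limited

def minimise(record: dict) -> dict:
    """Drop everything outside the purpose-limited field list and replace
    the employee's name with a pseudonym (NOT true anonymisation)."""
    kept = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    kept["employee_id"] = hashlib.sha256(record["name"].encode()).hexdigest()[:12]
    return kept

raw = {"name": "A. Yilmaz", "role": "analyst", "union_membership": "yes",
       "tasks_completed": 12, "hours_worked": 38}
clean = minimise(raw)
# Special-category data (union membership) and the name never reach the model
assert "union_membership" not in clean and "name" not in clean
```

The design point is that minimisation happens before processing, not after: data the system never receives cannot be unlawfully processed, leaked or repurposed.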
VI. PROCESSING PERSONAL DATA WITH ARTIFICIAL INTELLIGENCE IN THE EMPLOYMENT RELATIONSHIP IN THE EUROPEAN UNION AND THE UNITED STATES
At the level of the European Union, the processing of personal data in the employment relationship using artificial intelligence has been the subject of extensive discussions and regulations both from the point of view of data protection law and from the point of view of employees’ rights. The GDPR, the EU’s fundamental data protection regulation, attaches particular importance to the processing of employees’ personal data. The GDPR has authorised member states to adopt more detailed rules concerning data processing in the employment relationship; countries such as Germany have used this authority to introduce detailed provisions in their national legislation regarding the processing of employees’ data. Although the GDPR does not contain direct regulations specific to the employee–employer relationship, it sets out the basic principles that employers must follow when processing employee data: lawful and transparent processing, purpose limitation, data minimisation, accuracy, storage limitation and ensuring data integrity and confidentiality.
The GDPR recognises the right of the data subject not to be subject to decisions based solely on automated processing that significantly affect them. Accordingly, a person should not be subject to decisions that have serious legal or similar effects, such as being evaluated and rejected or penalised solely by automated systems (e.g., artificial intelligence). Decisions on recruitment or promotion taken entirely by artificial intelligence may fall within the scope of this provision. An employer may apply such an automated decision only if the explicit consent of the data subject is obtained, if it is necessary for the conclusion or performance of the contract or if it is expressly provided for by law. Furthermore, even in these cases, the data subject must be granted the right to object, to request human intervention and to express his or her point of view. For example, a candidate in Europe who is not invited to an interview solely on the grounds that he or she is considered “unsuitable” as a result of an algorithmic assessment has the right to object to this decision under the GDPR.
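One common way to operationalise the human-intervention safeguard described above is to design the pipeline so that the algorithm is never allowed to issue a final adverse decision on its own. The sketch below is a minimal, hypothetical illustration of that routing pattern; the names, cutoff and workflow are assumptions, not a prescribed compliance mechanism.

```python
# Hypothetical sketch: adverse algorithmic outcomes are queued for a
# human reviewer instead of becoming final automatically. All names
# and the cutoff value are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Decision:
    candidate: str
    algo_score: float
    outcome: str              # "advance" or "needs_human_review"
    reviewed_by_human: bool = False

def decide(candidate: str, algo_score: float, cutoff: float = 0.5) -> Decision:
    """Never emit a final rejection automatically: below-cutoff scores
    are routed to a human reviewer, who makes the final call."""
    if algo_score >= cutoff:
        return Decision(candidate, algo_score, "advance")
    return Decision(candidate, algo_score, "needs_human_review")

d = decide("candidate_42", 0.3)
assert d.outcome == "needs_human_review" and not d.reviewed_by_human
```

For the review step to count as meaningful human involvement rather than rubber-stamping, the reviewer must have real authority and information to overturn the score, a point regulators have repeatedly stressed.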
Another legal instrument relating to the processing of personal data by artificial intelligence in the European Union is the AI Act. The AI Act is the first comprehensive and binding piece of legislation that classifies and regulates AI systems according to their risk levels. AI systems used in areas that directly affect vital rights and interests, such as recruitment, work management, and employee safety, have been classified as “high risk.” For example, AI systems that make decisions regarding employees’ promotion, assignment or dismissal are included in the high-risk group. The AI Act makes it mandatory for high-risk AI systems to meet certain conditions before being placed on the market; these include obligations such as risk analysis and mitigation, quality management, prohibition of discrimination, human oversight, transparency and accountability. Employers are required to inform employees when using such systems, to monitor the system and to intervene in the event of non-compliance. In addition, the AI Act has completely prohibited certain AI practices that are contrary to human dignity, such as social scoring, and has stated that algorithmic practices that contravene the principle of equality in recruitment processes will be considered unlawful.
The EU’s approach to artificial intelligence is not limited to the protection of personal data; it also includes the objectives of preventing discrimination and ensuring the safeguarding of employees’ rights. Under EU fundamental rights, the protection of personal data is recognised as a fundamental right, and the prohibition of discrimination continues to apply in employment relationships. Accordingly, if an AI system leads, even indirectly, to discrimination based on grounds such as race, gender or religion, this will be considered both unlawful data processing under the GDPR and a violation of the principle of equal treatment under labour law. Supervisory authorities such as the European Data Protection Board and the European Data Protection Supervisor have also issued various opinions and guidelines on the use of AI in the workplace, emphasising that employers must strictly comply with the principles of transparency, data minimisation, purpose limitation and data security when using AI systems.
In the United States, although there is not yet comprehensive regulation at the federal level, some states have regulated this issue through their own laws. In this regard, the Artificial Intelligence Video Interview Act enacted by the State of Illinois can serve as an example. This law imposes certain obligations on employers who plan to use AI analysis in video interviews: the employer must inform the job candidate in writing in advance and explain that AI analysis will be used in the interview; moreover, the basic working principles and evaluation criteria of the AI must be explained to the candidate and the candidate’s consent must be obtained. The law also requires employers to collect the demographic information (for example race and ethnic origin) of candidates who are eliminated on the basis of AI analysis and to report this information annually to the relevant state agency. This practice is a measure aimed at monitoring the potential biased evaluations of algorithms.
Another regulation on this subject is the Automated Employment Decision Tools Law, which was adopted by New York City and entered into force on 5 July 2023, and which prohibits the use of automated decision-making tools (such as résumé-screening AI or models that generate job suitability scores) in recruitment and promotion processes unless the specified conditions are met. Accordingly, before using such tools, employers are required to have a “bias audit” conducted by an independent organisation at least once a year and to make a summary of the audit results public. In addition, candidates or employees residing in New York must be informed at least ten business days in advance that an automated evaluation tool will be used and on what criteria the evaluation is based. This regulation aims to ensure that AI systems are used in a transparent and accountable manner and to prevent decisions from being made through “black-box” processes. The general approach in the United States to the processing of personal data using artificial intelligence in the employment relationship focuses on preventing discrimination and ensuring transparency in the use of AI.
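Audits of this kind typically compare selection rates across demographic groups and report each group's impact ratio relative to the best-treated group. The sketch below illustrates that arithmetic with invented applicant counts; under the widely used "four-fifths" rule of thumb from US employment-discrimination practice, a ratio below 0.8 signals possible adverse impact.

```python
# Illustrative impact-ratio arithmetic of the kind a bias audit might
# report. The applicant counts are invented for the example.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio = a group's selection rate divided by the highest
    group selection rate (so the best-treated group scores 1.0)."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# (selected, total applicants) per demographic group -- invented figures
groups = {"group_x": (40, 100), "group_y": (20, 100)}
ratios = impact_ratios(groups)
print(ratios)  # group_y's ratio of 0.5 falls well below the 0.8 benchmark
```

Publishing such ratios is what turns a "black-box" screening tool into something candidates and regulators can actually scrutinise.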
VII. POTENTIAL PROBLEMS ARISING FROM THE PROCESSING OF PERSONAL DATA WITH ARTIFICIAL INTELLIGENCE IN THE EMPLOYMENT RELATIONSHIP
Employers have the opportunity to collect a wide range of data about employees or job applicants, from behaviours to health information, from biometric data to various consumption habits. There is a risk that these data will be processed by AI-supported systems in a manner that is contrary to the right to the protection of personal data and to the applicable legal regulations. This situation gives rise to various concerns, especially with regard to the protection of personal data. In order to prevent such concerns, it becomes necessary to take legal measures for the protection of personal data and to establish robust protection mechanisms based on data security. AI-based systems can provide employers with data such as the frequency and duration of employees’ conversations with each other during working hours. However, constant monitoring of employees during working hours has the potential to exert psychological pressure on employees, and this may lead to increased stress levels and more serious negative effects at the social level.
AI-based performance measurement processes may produce results that do not fully reflect reality, and there is a possibility that this will have negative effects on the sustainability of the employment relationship. The performance evaluation of employees by AI systems depends on variables and criteria determined by users and developers. This directly affects the reliability and accuracy of the results obtained. When employees’ performance is evaluated by AI technologies and only numerical data are taken as the basis, it is highly likely that results contrary to the principles of justice and fairness will emerge. There is no opportunity to question the accuracy of results obtained on the basis of AI-based data analyses, and if these data are shared with third parties, there is a possibility that employees’ or candidates’ future job applications will be adversely affected. In other words, data analyses that are contrary to the law and the facts may create negative effects on the lives of employees and candidates. If such data-processing activities carried out through AI-supported systems are detected, the legal responsibility of employers should be brought within a certain framework.
VIII. CONCLUSION
AI-supported personal data processing practices in employment relationships have been evaluated in a multidimensional manner within the framework of the PDPA, the TCO and the Labour Act in Türkiye; within the framework of the GDPR and the AI Act in the European Union; and at the state level in the United States by way of example. The analysis shows clearly that, when employers collect and process employees’ personal data and make decisions based on those data using AI technologies, they must act not only in accordance with legal obligations but also in compliance with principles based on ethics and fundamental rights. In particular, compliance with principles such as transparency, data minimisation, purpose limitation, the prohibition of discrimination and human intervention is of great importance both for protecting employees’ privacy and for safeguarding them against automated decision-making processes. Otherwise, AI-supported systems producing incorrect or biased outputs may lead to consequences that could directly harm employees’ careers, psychological integrity and fundamental rights. In this context, both the guidance documents published by national regulatory bodies and the decisions issued by high courts provide a clear framework of oversight and accountability for employers. In conclusion, without ignoring the opportunities offered by artificial intelligence in working life, it is essential to ensure that these technologies are used in a way that guarantees the fundamental rights and freedoms of employees. Employers have duties to comply with the legislation and to act with a sense of ethical responsibility in personal data-processing activities. In the future, as technological developments in this field accelerate, it will become necessary to make these legal and ethical boundaries even more clear.
Therefore, both updating the legislation and increasing the level of awareness of employers are important for the protection of individual rights and for the establishment of a sustainable working order.
FOOTNOTES
1. Personal Data Protection Act No. 6698, Art. 1, Official Gazette (Resmî Gazete) No. 29677, 7 April 2016.
2. Turkish Code of Obligations No. 6098, Art. 419, Official Gazette No. 27836, 4 February 2011.
3. Labour Act No. 4857, Art. 75, Official Gazette No. 25134, 10 June 2003.
4. Turkish Penal Code No. 5237, Art. 135, Official Gazette No. 25611, 12 October 2004.
5. Personal Data Protection Act No. 6698, Art. 12, Official Gazette No. 29677, 7 April 2016.
6. Personal Data Protection Act No. 6698, Art. 11, Official Gazette No. 29677, 7 April 2016.







