Legal Analysis of the Use of AI Emotion Detectors in Online Recruitment by Private Companies
DOI: https://doi.org/10.59261/iclr.v3i1.41

Keywords: AI, Privacy, Recruitment, Algorithmic Discrimination, Law

Abstract
This study analyzes the legal implications of the use of Artificial Intelligence (AI) emotion detectors in online recruitment by private companies, focusing on the protection of prospective employees' privacy rights and the potential for algorithmic discrimination. The research employs a normative juridical method with statute, conceptual, and comparative approaches to examine regulations, concepts, and international practices. The results indicate that the use of AI emotion detectors has the potential to violate the principles of personal data protection stipulated in Indonesia's Personal Data Protection Law (PDP Law), particularly regarding the transparency of biometric data processing, the lawful basis of consent, and data security. In addition, there is a risk of algorithmic discrimination arising from bias in datasets, a lack of developer accountability, and the absence of specific regulations governing the use of AI in recruitment in Indonesia. These findings point to the need for policy reform, the development of technical guidelines, and algorithm audit mechanisms to ensure fairness and non-discrimination in the selection process. The study concludes that the use of AI emotion detectors must be regulated more comprehensively to balance technological efficiency with the protection of prospective employees' fundamental rights.