Interview with the AI Agent: The Legal Side of HR Sci-Fi


HR sci-fi is unfolding before our eyes: there is no doubt that HR technologies represent one of the fastest-evolving segments of digitalization and artificial intelligence. Today, it is entirely common for candidates to be interviewed by AI chatbots, for payroll to be assisted by AI-based software, and for AI-powered predictive analytics to be used in workforce planning. While progress cannot be halted, it is essential to pay close attention to legal compliance requirements during this hyperspace-speed transformation; otherwise, poorly executed digitalization could lead to serious headaches.

HR Digitalization at a Dizzying Pace

Digitalization, and within it AI-based solutions, is fundamentally transforming the HR function. This is most visible in recruitment. By digitizing traditional hiring processes, which are often time-consuming and prone to bias, significant efficiency gains can be achieved. AI tools can analyze resumes and compare candidates against each other and against the job requirements, while AI chatbots communicate with candidates, schedule interviews, and handle administrative tasks. Digitalization is also prominent in measuring employee experience, providing personalized training opportunities, performance evaluation, and HR administration.

One area worth highlighting is the revolutionary change in workforce planning. With AI-driven predictive analytics, HR departments gain real-time, accurate insights into potential employee turnover. This not only facilitates the timely replacement of departing staff but also enables companies to identify and proactively manage turnover risks and factors that could lead to labor shortages.

However, the real disruption is expected with the development and deployment of AI agents. These are AI systems with high levels of autonomy, decision-making capability, and adaptability. Unlike traditional AI tools, which mainly analyze data and make recommendations for human users, AI agents can make decisions, take action, and continuously learn with minimal human intervention. An AI agent can independently generate job postings according to internal demand, rank candidates based on their resumes, and even conduct interviews. Major market players such as SAP and Workday are already developing such agents, and Oracle has launched a service of this kind.

Thoughtful Procurement

The astonishing pace of development of digital HR solutions and AI-based services, coupled with the constantly evolving legal regulatory landscape, presents significant challenges for procurement and legal compliance teams. Finding the most suitable, advanced solution that integrates well with the existing operational model and technological environment while also fully complying with regulatory and internal organizational requirements is no easy task.

It is advisable to review current procurement processes and policies to ensure that they provide adequate control mechanisms. When procuring AI-based HR services in particular, it is especially important to clearly define objectives and functional expectations, determine system integration requirements, and conduct appropriate impact assessments and risk analyses, including not only data protection impact assessments but also, for example, bias audits. These services are often contracted by accepting complex general terms and conditions. Before signing, it is therefore critical to examine the AI-specific supplementary terms, which define, for instance, the conditions of permitted use, capacity limitations, and the terms of output usage, including related liability limitations.

Prudent Application

Once the appropriate digital HR solution or AI-based service has been selected, several legal and compliance requirements must be met during implementation and use. These solutions involve processing large amounts of personal data, often including sensitive data, which means compliance with data protection laws, particularly the GDPR and the Labor Code, is essential. Additionally, the requirements of the EU AI Act, which entered into force in August 2024, must also be considered.

From a data protection perspective, the buyer of a digital HR solution typically qualifies as the data controller, while the provider acts as the data processor. The buyer therefore bears primary responsibility for compliance, including properly informing job applicants and employees, establishing the correct legal basis for data processing, and concluding a data processing agreement. The use of new technology also typically necessitates a data protection impact assessment. A particularly intriguing question is which AI applications qualify as automated decision-making under the GDPR. Under the current rules, the lawfulness of AI systems that use personal data to make significant decisions without human intervention is questionable in an employment context: the freely given nature of employee consent is debatable, and legitimate interest alone is not an accepted legal basis for such decisions under the GDPR.

To comply with the EU AI Act, it is advisable to begin preparations now, even though most of its rules will only apply from August 2026. The first step in any compliance project is to determine the risk classification of the AI solution. Most AI-based HR services qualify as high-risk and are subject to numerous obligations. Certain systems, such as emotion recognition in the workplace, may however be deemed unacceptable risk, while others, like simple chatbots, may be considered limited risk, with different rules applying to each category. The second step is identifying which obligations apply to the purchaser of the digital HR solution. For high-risk systems, key obligations include properly informing employees, implementing the necessary technical and organizational measures (such as retaining system-generated logs), ensuring human oversight, and continuously monitoring the system's operation.

One Thing Is Certain: Uncertainty

Today’s geopolitical and economic turbulence, combined with the accelerating technological race, clearly impacts the regulatory environment. The Draghi Report from September 2024 identified overregulation as a key factor in the EU’s weak competitiveness. In January 2025, the European Commission published the “Competitiveness Compass” strategy, which prioritizes regulatory simplification. Then in April 2025 came the bombshell news: the European Commission is preparing to propose narrowing the scope of the GDPR, with changes expected to primarily benefit small and medium-sized enterprises. The question remains: what will be the next step on the path to deregulation? One thing is clear: those who stay out of digitalization will fall behind, and up-to-date legal support is essential for successful implementation.

By Zoltan Tarjan, Senior Associate, Jalsovszky