The World Is Regulating AI in the Workplace. Only One Country Has Given Workers Legal Ground to Stand On.
A court in Hangzhou just established something that legislators in Washington, Brussels, and Seoul have so far refused to codify: a principle that says a company's decision to adopt AI is a business choice, not an act of God, and that workers can't be made to absorb the cost of it alone.
The ruling came down on April 28 from the Hangzhou Intermediate People's Court, which upheld a lower court's decision that a tech firm had unlawfully terminated a quality assurance supervisor after AI automated his role.
The worker, identified only as Zhou, had been earning 25,000 yuan per month, roughly $3,655. The company reassigned him with a salary cut to 15,000 yuan. He refused and was dismissed. He won at arbitration. The company sued. It lost at the district level. It appealed. It lost again.
The court found that "the termination grounds cited by the company did not fall under negative circumstances such as business downsizing or operational difficulties, nor did they meet the legal condition that made it 'impossible to continue the employment contract.'"
The legal question at issue was whether AI-driven automation qualifies as a "major change in objective circumstances" under China's Labor Contract Law — a standard that typically refers to significant events like company relocation or mergers. The court said it doesn't.
This wasn't an isolated ruling. A data mapping worker in Beijing who was replaced by AI and dismissed also won his case through arbitration last year. That panel ruled that the tech company's decision to switch to AI was a business choice rather than an uncontrollable event, and that by terminating the employment contract, the company was shifting the cost of its own technological transformation onto the worker.
Both rulings land at a moment of serious pressure on the global tech labor market. More than 78,000 technology workers were laid off in the first four months of 2026, and nearly half of those cuts were directly attributed to AI replacing human roles.
Meta cut approximately 8,000 positions in May alone. Oracle eliminated between 20,000 and 30,000 employees in March. Block's chief executive stated that the company's reduction from 10,000 to 6,000 employees was driven by growing AI capabilities.
The Hangzhou precedent matters because of what it requires from employers who can't meet that legal threshold. They must offer genuine retraining, reassignment at comparable pay, or full wrongful termination compensation.
Pan Helin, an economist and member of an expert committee under China's Ministry of Industry and Information Technology, argued that while AI-driven job displacement may be inevitable, companies must ensure fair treatment during transitions, including reasonable reassignment arrangements and adequate compensation for layoffs.
The interesting wrinkle in China's position is that it's not anti-AI. China launched a months-long enforcement campaign against AI misuse in 2026, targeting deepfakes, fraud, and disinformation, and has introduced mandatory labeling standards for AI-generated content and new regulations governing AI chatbots and virtual human services.
The government's approach is not to restrict AI but to regulate its applications while ensuring that the economic benefits don't come at the expense of social stability. China's urban youth unemployment rate reached 15.3 percent in March, and the political sensitivity of mass layoffs in an economy already struggling with deflation and weak consumer demand makes these rulings as much about maintaining order as about interpreting contract law.
That context is worth holding in mind before reading the Hangzhou ruling as purely a worker-rights victory. It's also a stability instrument. But the legal mechanism it creates is real regardless of the motivation behind it.
What the EU actually does and doesn't do
The EU AI Act is the most comprehensive AI regulatory framework outside China, and it does address employment.
Under the Act, AI systems used for recruitment, selection, targeted job advertising, candidate evaluation, performance monitoring, and decisions about contract terms or termination are classified as high-risk, triggering mandatory risk assessments, technical documentation, bias testing, human oversight, transparency disclosures, and continuous monitoring. Full high-risk system obligations are scheduled to take effect in August 2026, with compliance requiring worker notice, human oversight, and logging processes.
What the EU Act doesn't do is tell a company it can't reduce headcount because AI has made certain roles redundant. It does require employers to consult works councils before implementing AI, and in some jurisdictions to obtain their agreement, but it does not ban AI-driven job displacement outright.
That's procedural protection, not substantive protection. A company that checks every box (notifies the works council, conducts a risk assessment, documents its oversight processes) can still eliminate the role.
Belgium's Collective Bargaining Agreement No. 39 goes further, requiring employers to consult worker representatives on the social consequences of any new technology with significant collective effects on employment. But consultation isn't a veto, and it isn't the kind of substantive legal constraint that Chinese courts have now applied.
South Korea's new framework
South Korea's AI Basic Act took effect on January 22, 2026, making it the second country, after the EU, to enact a comprehensive national AI regulatory framework.
The law defines "High-Impact AI" broadly to include systems used for judgments or evaluations affecting individuals' rights or obligations, a definition that reaches employment-related determinations such as resume screening, candidate ranking, skills assessments, performance evaluation algorithms, and compensation or disciplinary decision tools.
Organizations deploying high-impact AI in employment must conduct risk assessments, maintain human oversight, notify users of AI-generated decisions, and document operational data. Again, the obligations here are about process transparency and oversight, not about constraining the employment decision itself.
Even so, the AI Basic Act explicitly names only recruitment among employment uses as high-impact, leaving ongoing worker management largely unregulated, unlike the EU AI Act. A Korean company that uses AI to automate a function and then eliminates the role currently faces no equivalent of the legal constraint that Zhou's case established in China.
The U.S. situation: disclosure not protection
The United States has no federal law that substantively limits an employer's ability to terminate workers because their functions were automated. What it has is a pair of disclosure proposals that haven't passed, and a patchwork of state laws that address transparency in hiring decisions.
The AI-Related Job Impacts Clarity Act, introduced by Senators Mark Warner and Josh Hawley in November 2025, would require publicly traded companies and federal agencies to file quarterly reports to the Department of Labor identifying how many employees were laid off because their job functions were automated by AI, how many new positions were created because of AI adoption, and how many vacancies were left unfilled because AI replaced the function.
The bill has been referred to the Senate Committee on Health, Education, Labor and Pensions. It's unclear how quickly it may move, and even less clear whether the current administration would approve it, given calls for minimal federal AI regulation.
The No Robot Bosses Act of 2025, introduced in the House in December 2025, would mandate human oversight and require disclosure when AI tools are used in employment decisions at employers with 11 or more employees. It hasn't passed either.
At the state level, Illinois requires employers to notify workers if AI is used in hiring, discipline, or discharge decisions. Colorado's AI Act, taking effect in mid-2026, mandates risk management policies and annual assessments of AI's impact on employment decisions. Neither creates a legal floor beneath which AI-driven dismissal can't go.
The gap that remains
The structural difference between what China's courts have established and what every other major jurisdiction currently provides is the difference between process rights and substantive rights.
Disclosure laws, risk assessments, transparency mandates, and works council consultations tell workers that AI was involved in the decision and require companies to document their reasoning. They don't tell companies that the reasoning isn't good enough.
What Hangzhou established, and what Beijing arbitrators had already said, is that a company's strategic decision to automate is categorically different from an unforeseeable external event that makes a contract impossible to perform.
That distinction matters enormously in employment law. It removes the escape hatch that automation might otherwise provide from redundancy compensation obligations, and it requires companies to genuinely exhaust alternatives before termination becomes defensible.
As Wang Xuyang, a lawyer at Zhejiang Xingjing law firm, told Xinhua: "Technological progress may be irreversible, but it cannot exist outside a legal framework."
That sentence describes a legal reality that currently exists in one country.
Whether it spreads depends on whether legislators elsewhere are prepared to move from transparency requirements to something that actually constrains outcomes, and whether courts in other jurisdictions, working with existing labor statutes, decide to draw the same line that Chinese courts have drawn without waiting for lawmakers to act.