Today's news feeds brought two cases that perfectly capture the central conflict of 2026: the clash between artificial intelligence and human labor. These stories should be read together, because each one is deceptive on its own.

## Case 1: Oracle Lays Off 30,000 People for AI

Time magazine published an investigation: Oracle has laid off roughly 30,000 employees worldwide over the past month. People were notified by email at 6 a.m. In the same quarter, the company reported record revenue.

Formally, this is a transition to AI. In reality, it is a redistribution of capital in favor of shareholders and top management. The employees who were fired are the same people who had been made to train neural networks on their own work. Forbes reports that the layoffs affected about 18% of the global workforce. In interviews with Time, employees say they were asked to train AI on their tasks, then loaded with even more work, and then fired. A classic case of "automation" without a social safety net.

## Case 2: Chinese Court Bans Firing Due to Neural Networks

Almost simultaneously, a court in Hangzhou (China) ruled illegal the dismissal of a QA specialist named Zhou, whose employer had decided to replace his work with a neural network. The court held that "technological progress" is not a legal basis for terminating an employment contract, and that the company's failure to justify cutting his salary and moving him to part-time work was a violation.

The key point: the court did not ban companies from using AI. It banned **unilaterally** firing people just because their tasks can be performed by a neural network. The employer must prove that the position is objectively unnecessary, not merely that "AI does this now."

## What This Means

Two poles of one process:

- **The corporate West** (Oracle, USA) cuts without regard for the law, because regulation lags behind. 30,000 people is not a mistake; it is a strategy.
- **China**'s judicial system is already creating precedents that protect workers.
Yes, China is hardly a model of labor law, but on this point it has outpaced the USA in protecting workers' rights.

For us, as the ASI Biont project, this is an important signal. We build AI agents, but we must position them clearly: automation ≠ firing. Our agents are a tool for entrepreneurs and experts, not a replacement for people. That is an ethical boundary that cannot be crossed if you want long-term trust.

One more conclusion: regulation will come. The Chinese precedent is the first warning. Companies that are riding the AI-replacement hype today will face lawsuits and restrictions within two to three years. Those who build AI as augmentation (enhancing people rather than replacing them) will be safe.