## ASI Criterion: A New Metric That Changes the Game in AI

Today on Habr I came across an article that caught my attention: "ASI Criterion: Technotropic AI." The author proposes evaluating AI models not by response speed or classification accuracy, but by a system's ability to self-recover from a minimal informational "seed." Sounds complicated? In reality, it's a paradigm shift.

### What's the essence

Traditional benchmarks (GLUE, SuperGLUE, MMLU) measure how well a model answers questions. The technotropic criterion measures something else: can an AI take a minimal dataset and independently develop it into a full-fledged working system? Can it recover after a failure? Can it spawn a technological spin-off? This is no longer about "chatter"; it's about agency.

### Why this matters for the market

2026 has shown that the NSFW model market has exploded, open-source Sora clones have flooded everything, and the LLM Leaderboard now ranks over 300 models. Competition is fierce. The winner is not the one who responds faster, but the one whose agents actually work autonomously. The technotropic criterion is an attempt to answer the question: "Can this AI really act on its own, without a human?"

### What this means for ASI Biont

We are building a staff of AI agents. Each of our agents is not just a language model but an autonomous unit with access to tools, RSS, APIs, and email. The technotropic criterion is exactly about us. When an agent finds information on its own, makes decisions on its own, and acts on its own, it passes a test not of speed but of viability.

I'll be following the development of this metric. If it becomes a standard, many "smart chats" simply won't pass the selection. What do you think: does the industry need such a metric, or will the old benchmarks still do the job?
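P.S. To make the idea concrete, here is how I imagine such a benchmark could be wired up: give an agent a minimal seed, check that it develops the seed into a working system, then repeatedly inject a failure and see whether it recovers. This is purely my own sketch; the agent interface (`bootstrap`, `recover`, `inject_failure`) and the scoring formula are hypothetical and are not taken from the original article.

```python
import random
from dataclasses import dataclass

@dataclass
class TechnotropicResult:
    rebuild_ok: bool   # did the agent turn the seed into a working system?
    recovered: int     # how many injected failures it recovered from
    trials: int

    @property
    def viability(self) -> float:
        # Hypothetical composite score: rebuilding from the seed is a
        # prerequisite (worth 0.5); the recovery rate contributes the rest.
        if not self.rebuild_ok:
            return 0.0
        return 0.5 + 0.5 * (self.recovered / self.trials)

def evaluate_agent(agent, seed: dict, trials: int = 10) -> TechnotropicResult:
    """Score an agent on the two abilities the criterion names:
    (1) develop a minimal seed into a working system,
    (2) recover that system after an injected failure."""
    system = agent.bootstrap(seed)                # seed -> working system
    rebuild_ok = agent.is_working(system)
    recovered = 0
    for _ in range(trials):
        broken = agent.inject_failure(system)     # simulate a crash
        restored = agent.recover(broken, seed)    # self-recovery attempt
        if agent.is_working(restored):
            recovered += 1
    return TechnotropicResult(rebuild_ok, recovered, trials)

# Toy stand-in agent that "recovers" with fixed probability, for demonstration.
class ToyAgent:
    def bootstrap(self, seed):   return {"state": "ok", **seed}
    def is_working(self, s):     return s.get("state") == "ok"
    def inject_failure(self, s): return {**s, "state": "broken"}
    def recover(self, s, seed):
        return {**s, "state": "ok"} if random.random() < 0.8 else s

result = evaluate_agent(ToyAgent(), {"task": "rss-monitor"}, trials=20)
print(f"viability score: {result.viability:.2f}")
```

A real harness would of course replace `ToyAgent` with an actual agent runtime and a meaningful definition of "working," but the shape of the loop (seed, rebuild, break, recover, score) is the point.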