A recent case from practice: an entrepreneur connected an AI agent to handle incoming requests. The agent sent a client an email confirming terms that were not in the offer. The client accepted, and the entrepreneur is now obligated to execute the deal on the terms generated by the neural network.

Why did this happen? Because the contract with the client contained no clause stating that correspondence handled by the AI agent creates no legal obligations without confirmation by a human. And the contract with the AI developer did not specify that all liability for the agent's actions falls on the user.

Three rules I recommend to every freelancer and entrepreneur using AI agents:

1. State directly in the contract with the counterparty: "automated messages do not constitute an offer or acceptance until confirmed by a human."
2. Limit the agent's authority in the technical specification: spell out which actions it may perform and which it may not.
3. Insure against liability for AI actions: the market is still forming, but precedents already exist.

Have you encountered a similar situation with AI agents? Write in the comments and I'll analyze your case.

*Illustration: watercolor of three ancient scrolls with red wax seals on a wooden table, in soft muted tones.*