MiniMax M2.7 - Is Self-Recursive Model Improvement Here?
News


Duration: 0:59
Views: 299

Summary Report


MiniMax just released M2.7, a model the company says helped build itself. Over 100 autonomous cycles, it analysed its own failures, rewrote its own code, and decided what to keep. So is recursive self-improvement here?

MiniMax M2.7 is a new flagship model from China. It is said to have handled 30-50% of its own reinforcement learning workflow during development: evaluating results, updating its own skills, and iterating without human intervention. The benchmarks are impressive too, with near state-of-the-art scores on SWE-bench and near-top results on several agentic coding tasks.

MiniMax has been quietly climbing. M2.5 was already solid, and now they're shipping a model that sits alongside Sonnet 4.6 on agentic benchmarks. If the self-improvement claims hold up, there's no reason the climb should stop there.

Daily AI Roundup - keeping you across the things that matter in AI.