AI News – May 6
OpenAI buys Windsurf for $3 billion; Gemini 2.5 Pro gets a big coding update; Lightricks releases an impressive open-source video model; HeyGen's Avatar IV launches and sets a new standard; and a new AR concept turns the world into a playable game.
Here’s today’s AI news. After Cursor was valued at $9 billion yesterday, its biggest rival in the AI coding space, Windsurf, is said to have been sold to OpenAI for the bargain price of $3 billion. We did say the sale was imminent after we saw Windsurf change their logo last week.
This brings a whole new set of tools under the control of OpenAI, and it will be fascinating to see what they do with the Windsurf product. With a combined $12 billion valuation for just two of the many AI developer tools, this is now a huge market. Google’s Gemini 2.5 Pro got an upgrade today to the I/O edition, in anticipation of the Google I/O event that takes place in two weeks’ time.
Gemini 2.5 Pro now ranks number one on LMArena in coding and number one on the WebDev Arena leaderboard, keeping Claude 3.7 down in second place. Google say that this model is especially good at building interactive web apps and have already shown off some very impressive demonstrations.
Lightricks, the company behind one of our favourite AI video tools, LTX Studio, has today introduced a new open-source video model that is amazing everyone. LTX-Video 13B is a 13-billion-parameter open-source AI video model with advanced features like multiscale rendering, multiple keyframe support and camera movement.
It can even be run locally on a consumer-grade GPU. Lightricks claim that the model is up to 30 times faster than comparable models, which also helps bring costs right down. With so many other big announcements due this week, this model might get a bit overshadowed, but it’s truly an impressive achievement.
HeyGen has launched its latest image-to-avatar model, Avatar IV. It uses a new audio-to-expression engine to create hyper-realistic videos from a single photo and a script, capturing tone, rhythm and emotion from the inputs to produce lifelike facial movements.
Of course, being users of HeyGen ourselves, we’re naturally biased, but this does feel like a big step change in the capability of avatar models. Some of the examples we’ve seen have been amazing. And finally, a new augmented reality concept is turning street spaces into playable games.
The first public Dreampark access point, in Santa Monica, transforms the street into an AR game using a Quest 3 headset. Personally, I can’t wait until every outdoor space is turned into Super Mario World. With hundreds of people punching blocks in midair and scrambling for virtual coins, what could possibly go wrong?