When we look back at the week, the things that happened seven days ago seem like a lifetime ago. Let’s take a look at all of the highlights from another crazy week in the world of artificial intelligence. Back at the start of last week in Tokyo, OpenAI unveiled a new agent called Deep Research.
It’s powered by o3 and, as the name suggests, this agent is aimed at complex research tasks that might take a human days, months or years to complete. Sam Altman said, “I think this is one of the best things that OpenAI has ever launched.” He also claimed that Deep Research could do “a single-digit percentage of all economically valuable tasks in the world.”
If he’s right, then at minimum that’s a trillion dollars’ worth of tasks. Sam also stated, “This is just the next step. This is about synthesising knowledge. Eventually, AI will be inventing new knowledge.” On Tuesday, Google released Gemini 2.0 with three new models that are accessible via the Gemini API.
There’s Flash, Pro and Flash-Lite. Pro is described as an experimental model that is pitched as being the best for coding performance and complex prompts, while Flash-Lite is designed for low-cost usage. There’s also the Gemini 2.0 Flash Thinking Experimental model, which is available to users of the Gemini app and can connect with Search, Maps and YouTube.
Since their release, the Gemini models have struggled to make any headlines; a week later, they’ve almost been forgotten about. Let’s see if Google can keep Gemini feeling relevant. At the end of the week, French AI company Mistral unveiled the latest version of their assistant, Le Chat.
It’s been designed as an incredibly versatile assistant with a simple interface that allows users to perform a whole spectrum of different tasks. This includes image generation, web search, a code interpreter, document analysis and a code canvas, and it’s all done at lightning speed.
It’s been very impressive so far, and it’s available for free on the Mistral website and on the app stores. It will be interesting to see if Mistral can make any inroads into ChatGPT’s user base. Will users consider switching if the service is impressive enough? Rounding up more of the OpenAI announcements: ChatGPT now allows any canvas that you create to be shared with other users.
They increased the memory limits for all subscriptions by 25%, added chain-of-thought output to their o3-mini model, and announced the opening of a German office for OpenAI in Munich. At a panel in Berlin, Sam Altman made quite a stir by telling the audience that he does not think he will be smarter than GPT-5.
It’s the first time we’ve heard talk of GPT-5 in a while, and it got everyone speculating about what the next step change in the OpenAI models might be. He also talked about how he expects AI development to continue accelerating, and that “it looks like we have unlocked an algorithm that can truly learn and that’s going to keep going”. The week finished on a high for OpenAI, which ran an advertising slot at the Super Bowl.
On Sunday, the pointillism-style advert was shown to an audience of around 120 million viewers. It received mixed reviews: some really loved the style and message of the advert, while others thought it was a missed opportunity to sell a greater narrative. Regardless, OpenAI certainly seems to have stolen some of the limelight back from DeepSeek this week.
Bringing us right up to today, the Wall Street Journal reported that a group led by Elon Musk had made a bid for OpenAI to the tune of $97.4 billion. Sam Altman took to X with his response, saying “no thank you but we will buy twitter for $9.74 billion if you want”, valuing X at a mere tenth of Elon’s bid for OpenAI.
Later, Sam was asked about the bid in an interview and had this to say: “OpenAI is not for sale. Elon tries all sorts of things for a long time. This is this week’s episode.” The Sam and Elon feud continues. PixVerse released a couple of brilliant promotional videos this week alongside version 3.5 of their software.
They showcased two of their effects options, Tiger’s Touch and Anything Robot. With the Anything Robot effect, you can transform just about anything into a robot. And it’s not just people that you can transform into cool-looking robots: you can also apply the effect to other things like your pets, your car or your toaster.
Pika Labs announced an amazing new AI video concept called Pikadditions. The idea is very simple, yet very powerful: add a piece of real-life content or one of your favourite clips as the reference scene, then add an image that you would like inserted into the video.
As you can see from the video, the effect is very compelling. You get to shoot the scene exactly as you want it and then direct the Pikadditions AI model to bring whatever crazy image you like into the scene and bring it all to life. You can try this out for free over at the Pika Labs site. Krea is attempting to change the generative AI game with a completely new approach to image prompting.
Krea Chat has just launched in beta, and it aims to give the user a much more natural, interactive and intuitive approach to generating AI art. The Krea Chat interface acts much like a typical LLM-style chat window. You start by creating an image with a basic prompt, but then you edit and refine the output through a conversational style of interaction.
The Ray 2 model from Luma Labs has been impressing everyone for several weeks. Until now it’s only offered text-to-video, but they’ve added an image-to-video model, so now you get the same unbelievably high-quality output from a much better input method. With the artistic control this now offers, we’re expecting some spectacular results.
In robotics news, Brett Adcock from Figure AI announced that they are ending their collaborative agreement with OpenAI. This came just days after Figure announced their second commercial customer, with the potential of shipping 100,000 robots, and just a day after Sam Altman announced that OpenAI themselves are beginning to invest in hardware. Brett Adcock said: “FIGURE made a major breakthrough on fully end-to-end robot AI built entirely in house.
We’re excited to show you in the next 30 days something no one has ever seen on a humanoid.” Let’s hope this isn’t just hype. New footage of the Unitree G1 robot being trained to simulate movements from real human data went viral this week. The output is from a project called ASAP.
In the video, we see movements simulating famous athletes like Cristiano Ronaldo, Kobe Bryant and LeBron James. Real footage of the athletes performing movements is fed into the system and then corrected over rounds of training to enhance fluidity and fix any errors.
This may look a little rough and robotic right now, but remember the saying: ‘this is the worst it’s ever going to be’. If robotics continues to accelerate at its current pace and this method of training proves fruitful, we could see robots outperforming humans in athletic pursuits very, very soon.
It might be time to give up on any thoughts you had of outrunning them if it all goes horribly wrong in the future. That’s all for this week’s roundup. Let’s see what the next week brings.