The AI News for February 15.
OpenAI keeps on cooking. Today, Sam Altman announced an upgraded GPT-4o model, reported to have improved intelligence, personality, and interaction capabilities. Users are saying this version of ChatGPT feels smarter, more engaging, and adapts to their personality over the course of a conversation.
Many are speculating that this is a precursor to the GPT-4.5 release, which Sam Altman hinted this week was just weeks away. The upgraded model is already hitting new highs on various benchmarks and seems to be keeping 2025 on its exponential intelligence curve.

Microsoft has released version 2 of its OmniParser tool.
OmniParser allows an LLM to ‘see’ your desktop and take actions on your behalf. In the demo video, we see it automate buying some milk. Rather than reading a website’s code, OmniParser interprets the visuals on the screen, so in principle it can interact with any kind of software, and it can use any LLM as the underlying model.
Finally, in a wild blend of AI and neuroscience, AI can now convert brain signals into words. In a series of experiments carried out by Meta’s research arm, FAIR, participants had their typed words predicted with 80% accuracy from brainwaves captured through non-invasive methods.
The boundaries of what AI can achieve are being stretched and pushed every day.