OpenAI Gets Text-To-Speech. NotebookLM Gets Mind Mapping. Claude Gets Web Access. A Pika Sneak Peek!

AI News for March 20.
OpenAI finds its voice with text-to-speech, NotebookLM gains mind mapping, Claude gets real-time web search, and Pika Labs gives a sneak peek of its latest feature. Here’s today’s AI news. OpenAI has introduced three new audio models through its API: two for speech-to-text and one for text-to-speech.

These models allow developers to build AI agents that speak with natural voice interactions, opening up whole new frontiers of possibilities when working with the OpenAI API. Real-time audio processing capabilities in applications should produce some fascinating results.
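For developers wanting to try the new text-to-speech model, the request is a plain HTTPS POST. Here is a minimal sketch of building that request against OpenAI’s `/v1/audio/speech` endpoint; the model name (`gpt-4o-mini-tts`) and voice (`alloy`) are assumptions based on the announcement, so check the API reference for the current options.

```python
import json
import urllib.request

def build_tts_request(text, api_key, model="gpt-4o-mini-tts", voice="alloy"):
    """Build (but do not send) a text-to-speech request for OpenAI's
    /v1/audio/speech endpoint. A successful response is audio bytes."""
    payload = {"model": model, "voice": voice, "input": text}
    return urllib.request.Request(
        "https://api.openai.com/v1/audio/speech",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_tts_request("Hello from the new audio models!", api_key="sk-...")
# urllib.request.urlopen(req) would return the synthesised audio on success.
print(req.full_url)
```

The official Python SDK wraps the same endpoint, so in practice most developers will call it through the client library rather than raw `urllib`.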

You can play with the new voice AI model for free at OpenAI.FM.

Anthropic has released real-time web search in Claude. Not having the very latest information to hand has been a bit of a barrier for some Claude users, but now the whole Internet is available as a source. Initially this feature will only be available to paid users in the US, but the rollout should make its way to the free plan and more countries soon.

Google’s NotebookLM has a great new Mind Maps feature. This allows users to convert their notes and sources into visual diagrams, aiding the organisation and comprehension of complex information. It should really transform the way users interact with the information NotebookLM produces.

The ability to visualise complex data in this way is likely to become a big selling point for the platform. Finally, Pika Labs have given a sneak preview of an upcoming feature that will allow users to manipulate any character or object in a video. Simply add your reference video and then prompt this new Pika feature to transform a part of the scene while keeping everything else perfectly intact.

It’s only available to Pika Creative Partners at the moment, but we are very much looking forward to playing with it when it gets a full release.
