OpenAI launches ChatGPT Pulse

PLUS: Meta introduces Vibes feed for AI-generated content

Together with

Howdy, it’s Barsee again.

Happy Friday, AI family, and welcome back to AI Valley.

Today’s climb through the Valley reveals:

  • OpenAI launches ChatGPT Pulse

  • OpenAI compares AI to human workers across 44 occupations

  • Google DeepMind debuts AI models for robots that can sort laundry

  • Meta introduces Vibes feed for AI-generated content

  • Plus trending AI tools, posts, and resources

Let’s dive into the Valley of AI…

NEBIUS

Image Credit: Nebius

Bring your AI workloads over and we’ll cover the migration cost and give you up to 3 months for free. Nebius delivers supercomputer performance with hyperscaler flexibility, so you can build and scale AI models faster.

*This is sponsored

THROUGH THE VALLEY

OpenAI has rolled out ChatGPT Pulse in preview for Pro users on iOS and Android. Pulse delivers a short briefing every morning through swipeable cards, drawing on memory, past chats, and optional connections to Gmail and Google Calendar.

Image Credit: OpenAI

It will soon expand to Plus users before reaching everyone. Each night, Pulse runs background research to prep agendas, reminders, and travel notes, with filters to block sensitive topics. Briefs disappear unless saved, and users can guide future updates with feedback or a “Curate” prompt. Early testers say it works best once you set what you want it to track.

Why does it matter?

Pulse is OpenAI’s first real move toward proactive AI agents that don’t just respond, but also plan and act. By blending memory, integrations, and safety checks into a daily tool, it shows how ChatGPT could shift from being a reactive chatbot into a personal assistant that fits into everyday routines.

OpenAI has introduced GDPval, a benchmark that tests whether AI models can match professional-quality work across 44 occupations. The test pitted GPT-5, Claude Opus 4.1, Gemini 2.5, and Grok 4 against industry experts.

Image Credit: OpenAI

The benchmark covered 1,320 tasks from professionals with an average of 14 years of experience across nine fields, including healthcare and finance. Opus 4.1 came out on top with a 47.6% win rate, doing especially well in visual presentations. GPT-5 led in technical accuracy, and OpenAI noted that GPT-5's performance has roughly tripled relative to GPT-4o in just 15 months.

Why does it matter?

GDPval shows that AI models are only now starting to match human professionals on some tasks. But the rapid pace of progress suggests that AI could soon go far beyond what today’s workplace requires.

Google DeepMind has launched two new models in its Gemini Robotics family. Gemini Robotics-ER 1.5 works as the planner, building multi-step strategies with strong reasoning, vision, and tool use (even pulling info from Google Search when needed). Gemini Robotics 1.5 then carries out those plans, turning instructions and visual input into motor commands so robots can actually perform the tasks.

Image Credit: Google DeepMind

This two-model setup addresses a common issue in robotics: a single system has traditionally had to both plan and act, which often leads to mistakes. With ER 1.5 handling the planning and Robotics 1.5 handling the execution, robots can follow multi-step instructions much more reliably, from sorting recycling under local rules to performing complex physical tasks.

The models are designed to work across different types of robots. Gemini Robotics-ER 1.5 is already available through the Gemini API in Google AI Studio. (More details here.)

Why does it matter?

By splitting planning from execution, DeepMind is tackling one of robotics’ toughest challenges: turning natural language into accurate, reliable actions. This approach could speed up progress toward flexible, general-purpose robots that safely operate in real-world environments.

Meta has rolled out Vibes, a new feature in the Meta AI app that lets people create and share short AI-generated videos. It’s in early preview and reaching select users first.

Image Credit: Meta

With Vibes, you can start fresh, remix existing clips, or add visuals, music, and styles. Finished videos can be shared in the Vibes feed, sent directly to friends, or posted to Instagram and Facebook Stories and Reels. On Instagram, tapping a Meta AI video opens it in the Meta AI app for easy remixing.

According to Meta, Vibes runs on advanced generative models and uses a personalized feed built around creativity and remixing. The company is also partnering with artists and creators to improve the feature as it rolls out to more users.

Why does it matter?

Vibes pushes Meta AI beyond simple chatbot features into creative content and social sharing. By weaving it into Instagram and Facebook, Meta is betting that AI-powered video remixing could fuel a new wave of short-form content.

TRENDING TOOLS

  • GeoSpy AI > Analyzes your photos with advanced AI to pinpoint possible locations where they were taken

  • Perplexity Search API > A new API that delivers raw web search results for AI apps

  • Browser Use > A browser automation agent that can get past restrictions traditional web scrapers can't

  • Exa-code > A code-focused search tool aimed at reducing LLM code hallucinations

THINK PIECES / BRAIN BOOST

THE VALLEY GEMS

What’s trending on social today:

THAT’S ALL FOR TODAY

Thank you for reading today’s edition. That’s all for today.

💡 Help me get better and suggest new ideas at [email protected] or @heyBarsee

👍️ New reader? Subscribe here

Thanks for being here.

REACH 100K+ READERS

Acquire new customers and drive revenue by partnering with us

Sponsor AI Valley and reach 100,000+ entrepreneurs, founders, software engineers, and investors.

If you’re interested in sponsoring us, email [email protected] with the subject “AI Valley Ads”.