
OpenAI buys Jony Ive's AI device company for $6.5b

PLUS: Google’s take on AI smart glasses

Together with

Howdy again. It’s Barsee, and welcome back to AI Valley.

Today’s climb through the Valley reveals:

  • 🎨 OpenAI buys Jony Ive's io for $6.5b

  • 🕶️ Google’s take on AI smart glasses

  • 🦾 Tesla posts Optimus' most impressive video demonstration yet

  • 🤖 Plus trending AI tools, posts, and resources

Let’s dive into the Valley of AI…

Image: Lambda

Lambda's Inference API gives you unrestricted, production-grade access to today's best open models like DeepSeek-V3, Llama 4 Maverick, DeepSeek-R1 671B, and more with simple, transparent pricing. No rate limits. No tiers. No sales calls. Just raw power.

  • DeepSeek-V3: $0.34/M input, $0.88/M output

  • Llama 4 Maverick: $0.18/M input, $0.60/M output

  • DeepSeek-R1 671B: $0.54/M input, $2.18/M output

Ridiculously cheap, no call caps or throttling, production-ready, and developer-first.
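If you want to sanity-check what those per-million-token prices mean for your own workload, the arithmetic is straightforward. A minimal sketch (prices and model names taken from the list above; the `estimate_cost` helper is illustrative, not part of Lambda's API):

```python
# Per-million-token prices from the list above: (input $/M, output $/M).
PRICES = {
    "DeepSeek-V3": (0.34, 0.88),
    "Llama 4 Maverick": (0.18, 0.60),
    "DeepSeek-R1 671B": (0.54, 2.18),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of a single request."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# e.g. a 2,000-token prompt with a 500-token reply on DeepSeek-V3
print(round(estimate_cost("DeepSeek-V3", 2_000, 500), 6))  # → 0.00112
```

So a typical chat request lands well under a cent; at these rates, a full million tokens in and out on Llama 4 Maverick runs $0.78.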

*This is sponsored

THROUGH THE VALLEY

📱🎨 OpenAI buys Jony Ive's io for $6.5b

Image: OpenAI

OpenAI is acquiring Jony Ive’s AI hardware startup, io, in a $6.4 billion all-equity deal (its largest acquisition yet) as it ramps up efforts to bring AI into the physical world. Ive, the iconic designer behind Apple’s iPhone and MacBook, will take on a central creative role at both OpenAI and io, while his design firm LoveFrom remains independent. Founded in 2024 by Ive and other Apple veterans, io will now integrate with OpenAI’s research and product teams to build sleek AI-powered devices.

The first product is expected to be a pocket-sized, screen-free AI companion (not a phone or glasses) that Altman describes as a potential “third core device” alongside the iPhone and MacBook, with plans to ship 100 million units. Internally, Altman claimed the deal could increase OpenAI’s value by $1 trillion and spark an entire family of AI devices.

This move follows OpenAI’s earlier 23% stake in io, its investments via the startup fund, and deals like Windsurf and Physical Intelligence. The goal, according to Sam Altman and Jony Ive, is to build products that inspire and empower users while reducing screen dependency (a nod to Ive’s regret over the unintended consequences of the iPhone).

OpenAI also published an official introduction video alongside the announcement.

💰 Google begins showing ads in AI Mode

Image: Google

Google is bringing more ads to its AI-powered search experience. On May 21, the company announced that it’s starting to test ads in AI Mode, a new chatbot-style tab in Google Search that summarizes queries with relevant links. Now, alongside step-by-step answers (like how to build a website), users may see “sponsored” ads such as product recommendations for website builders. These tests are live for both desktop and mobile users in the US.

Google is also expanding ads in AI Overviews (the AI-generated summaries at the top of some searches) from mobile to desktop. For instance, a query about flying with small dogs might now include a sponsored list of dog carriers with links to purchase. Ads in AI Overviews are now rolling out to US users, with plans to expand to English-language markets in select countries later this year.

🕶️ Google’s take on AI smart glasses

Image: Google

At I/O 2025, Google unveiled Glasses with Android XR, a sleek new generation of AI-powered smart glasses developed with Samsung and fueled by Gemini AI. The glasses pair with smartphones, include speakers, and offer an optional in-lens display for private viewing. Google’s focus is real-world functionality and fashion: it’s partnering with Gentle Monster and investing up to $150M in Warby Parker for co-development and potential equity.

In a live demo, the glasses attempted real-time translation between Farsi and Hindi and showed Gemini’s ability to interpret images and surroundings, hinting at future immersive experiences. The product, expected to launch in 2026, builds on the surprise Android XR prototype debut at TED 2025.

🤖 Tesla posts Optimus' most impressive video demonstration yet

Tesla just showed off its most advanced Optimus demo yet: the humanoid robot completes household tasks like sweeping, vacuuming, tearing paper towels, stirring food, opening cabinets, and moving a car part, all using a single neural network trained on first-person human videos. Unlike previous versions that relied on teleoperation, this update lets Optimus learn directly from video, allowing it to quickly pick up new skills via voice or text commands. Tesla VP Milan Kovac added that future updates will let the robot learn from third-person and internet videos too, pushing toward a future where Optimus can watch, learn, and act, possibly making it Tesla’s biggest product ever.

Other Headlines

  • ByteDance released BAGEL, a multimodal model with image-generation capabilities similar to GPT-4o.

  • Google launches coding agent Jules in beta with free daily tasks.

  • Anthropic has signaled the imminent release of its new Claude 4 Sonnet and Claude 4 Opus models.

  • Mistral launches Devstral, an open-weight coding model.

TRENDING TOOLS

  • Taplio’s AI - Trained on 500M+ LinkedIn posts to help you write high-performing content in seconds. *

  • UnGPT - Transform AI-generated text into natural, human-like content that preserves your original meaning while helping you bypass AI detection tools.

  • Google Flow - Google’s new AI filmmaking platform.

*sponsored


THAT’S ALL FOR TODAY

Thank you for reading today’s edition.

💡 Help me get better and suggest new ideas at [email protected] or @heyBarsee

👍️ New reader? Subscribe here

Thanks for being here.

REACH 100K+ READERS

Acquire new customers and drive revenue by partnering with us

Sponsor AI Valley and reach 100,000+ entrepreneurs, founders, software engineers, and investors.

If you’re interested in sponsoring us, email [email protected] with the subject “AI Valley Ads”.