Meta unveils Llama 4 models

PLUS: Midjourney launches V7

Howdy again. It’s Barsee, and welcome back to AI Valley.

Another day, another AI adventure.

Today’s climb through the Valley reveals:

  • Meta unveils Llama 4 models

  • Midjourney launches V7

  • OpenAI revises GPT-5 roadmap

  • Plus trending AI tools, posts, and resources

Let’s dive into the Valley of AI…

PEAK OF THE DAY

🦙 Meta unveils Llama 4 models: A new era in multimodal AI

Meta just announced two new open-weight multimodal models—Llama 4 Scout and Llama 4 Maverick. Both are designed to handle text and images within a unified architecture, delivering top-tier performance in the field.

What's special here?

  • Mixture-of-Experts (MoE) design: Instead of running every parameter on every token, the new Llama 4 models activate only a few "expert" sub-networks per token, making them fast and compute-efficient (see the routing sketch after this list).

  • Massive training: Jointly trained on vast amounts of text, image, and video data for "broad" visual understanding.
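
To make the MoE idea concrete, here is a minimal sketch of top-k expert routing in plain NumPy. This is not Meta's implementation; the layer sizes, the softmax router, and the single-expert (top-1) routing are illustrative assumptions, just to show how only a fraction of the total parameters does work for any given token.

```python
# Minimal sketch of top-k mixture-of-experts routing (illustrative only,
# not Meta's Llama 4 code). Each token is routed to a small subset of
# "expert" feed-forward blocks, so only a fraction of the model's total
# parameters is active per token.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 16, 1                      # hypothetical sizes
router_w = rng.standard_normal((d_model, n_experts))       # routing weights
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (n_tokens, d_model) -> (n_tokens, d_model)."""
    logits = x @ router_w                                   # score every expert per token
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)              # softmax over experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = np.argsort(probs[t])[-top_k:]              # keep only the top-k experts
        for e in chosen:
            out[t] += probs[t, e] * (x[t] @ experts[e])     # weighted expert output
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)                              # (4, 64)
```

With top_k = 1, each token touches only 1 of the 16 expert blocks per layer, which is the basic reason an MoE model's "active" parameter count is so much smaller than its total.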

What can they do?

Llama 4 Scout:

  • Utilizes 17B active parameters from a 109B total, distributed across 16 experts.

  • Supports a massive 10-million-token context window. It excels at document summarization, activity pattern analysis, and codebase reasoning, while running on a single GPU.

  • Beats Gemini 2.0 Flash-Lite and Llama 3.3 70B on visual benchmarks.

Llama 4 Maverick:

  • Also uses 17B active parameters, but draws from a much larger pool of 400B parameters spread across 128 experts.

  • Specializes in creative writing, multilingual understanding, and visual analysis.

  • Outperforms GPT-4o and Claude 3.7 Sonnet on various benchmarks, and matches DeepSeek v3 on coding despite using less than half the active parameters (see the quick arithmetic below).
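
A quick back-of-the-envelope calculation makes the active-vs-total distinction concrete. The figures come straight from the bullets above; the percentages are just derived arithmetic, not official Meta numbers.

```python
# Fraction of parameters active per token, using the figures quoted above.
models = {
    "Llama 4 Scout":    {"active": 17e9, "total": 109e9, "experts": 16},
    "Llama 4 Maverick": {"active": 17e9, "total": 400e9, "experts": 128},
}

for name, m in models.items():
    frac = m["active"] / m["total"]
    print(f"{name}: {m['active']/1e9:.0f}B of {m['total']/1e9:.0f}B params active "
          f"(~{frac:.0%}) across {m['experts']} experts")

# Llama 4 Scout:    17B of 109B params active (~16%) across 16 experts
# Llama 4 Maverick: 17B of 400B params active (~4%) across 128 experts
```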

What’s powering this performance? 

Both models were distilled from Llama 4 Behemoth, a giant internal model with 2 trillion total parameters. It’s still unreleased but reportedly outperforms GPT-4.5 and Claude 3.7 Sonnet on STEM reasoning tasks.

Where can I access them? 

They’re available now on llama.com, Hugging Face, and are integrated into Meta AI features on WhatsApp, Messenger, Instagram, and the Meta AI website.

Why does it matter? 

While OpenAI and Anthropic stick to closed APIs, Meta is going all-in on open source. If Scout and Maverick keep scaling, Meta could seriously challenge the dominance of closed LLMs—and change how AI progress is shared.

⚡️ Access up to 512 NVIDIA Blackwell GPUs with just a click

Image Source: Lambda

Scale your AI workloads with multi-node NVIDIA HGX B200 clusters on Lambda 1-Click Clusters™.

  • No long-term commitment necessary

  • No complex infrastructure management

  • 3x faster training, 15x faster inference

*This is sponsored

VALLEY VIEW

Generated by Midjourney V7

🎨 Midjourney launches V7: Midjourney has released V7, its first major update in nearly a year, now in alpha testing. The model boosts image quality, prompt accuracy, and coherence in tricky areas like hands and objects. It debuts “Draft Mode”, generating images 10x faster at half the cost, with enhancement options. V7 also adds voice interaction and requires users to rate 200 images to unlock personalization. Features like upscaling and retexturing are missing for now but expected soon.

🧠 OpenAI revises GPT-5 roadmap: OpenAI now plans to release both its o3 reasoning model and a next-gen o4-mini in the coming weeks, reversing an earlier decision to fold o3 into GPT-5 rather than ship it on its own. The shift comes as GPT-5 faces a slight delay, now expected in a few months, to allow smoother integration of advanced reasoning capabilities and to ensure infrastructure readiness.

🎮 Microsoft showcases AI-generated Quake II: Microsoft has released a real-time playable demo of Quake II generated entirely by its Muse AI model. The demo reimagines the 1997 classic with AI-generated visuals and interactions, accessible via web browser. While some elements appear blurry or less detailed than the original, it highlights Muse AI’s ability to dynamically render environments and gameplay in real time.

📱 OpenAI eyes $500M+ acquisition of io Products: OpenAI is reportedly in talks to acquire io Products, a startup co-founded by Sam Altman and legendary designer Jony Ive, in a deal valued at over $500 million. The company is developing an AI-powered personal device, and the move would mark OpenAI’s entry into the AI hardware space, combining Ive’s design expertise with OpenAI’s AI capabilities.

🧠 DeepSeek unveils new dual reasoning method for LLMs: DeepSeek, in collaboration with Tsinghua University, has introduced a new approach combining two powerful reasoning techniques to help LLMs tackle complex queries with greater accuracy and speed. The resulting DeepSeek-GRM models outperform existing methods by aligning outputs with human preferences and enhancing logical reasoning. DeepSeek plans to open-source them soon.

TRENDING TOOLS

THINK PIECES / BRAIN BOOST

VALLEY GEMS


SUNSET IN THE VALLEY

Thank you for reading. That’s all for today’s issue.

💡 Help me get better and suggest new ideas at [email protected] or @heyBarsee

👍️ New reader? Subscribe here

Thanks for being here.

REACH 100K+ READERS

Acquire new customers and drive revenue by partnering with us

Sponsor AI Valley and reach 100,000+ entrepreneurs, founders, software engineers, investors, and more.

If you’re interested in sponsoring us, email [email protected] with the subject “AI Valley Ads”.