Nvidia's next generation AI chips
PLUS: Boston Dynamics' major leap in humanoid mobility
Together with Intercom
Howdy again. It’s Barsee, and welcome back to AI Valley.
Another day, another AI adventure.
Today’s climb through the Valley reveals:
Nvidia's next big moment has arrived
Boston Dynamics shows off another major leap in humanoid mobility
LG launches Korea's first open-source reasoning model
Plus trending AI tools, posts, and resources
Let’s dive into the Valley of AI…
PEAK OF THE DAY
Nvidia's next big moment has arrived
Nvidia CEO Jensen Huang unveiled key updates on next-generation AI chips, reasoning models, and robotics during his keynote at the company's annual GTC conference on Tuesday.
Here's what was unveiled:
AI Chips:
Blackwell Ultra chips are set to ship in the second half of this year, offering improved performance over the original Blackwell along with more memory to support larger models and agentic AI tasks.
The Vera Rubin chip is expected in late 2026 with twice the performance of Blackwell, handling 50 petaflops for inference with 288GB of fast memory.
In 2027, Rubin Ultra will debut featuring multiple GPUs for enhanced performance. Nvidia’s next architecture, Feynman, is scheduled for 2028, continuing the company's tradition of naming chip families after scientists.
Reasoning Models:
Introduced Dynamo, free software for accelerating and scaling reasoning processes in AI, and the Llama Nemotron family of reasoning models, built to help developers and enterprises create AI agents.
Robotics:
Launched Isaac GR00T N1, the first open and customizable foundation model for humanoid reasoning, and Nvidia Cosmos, a new world model offering enhanced control for AI-driven environments.
Huang also showcased the Newton open-source physics engine, built with Google DeepMind and Disney Research, followed by a live demo featuring Blue, a small AI-powered robot inspired by Disney's WALL-E.
Personal AI Supercomputers:
Nvidia also unveiled the DGX Station and DGX Spark (previously Project DIGITS), both powered by Blackwell chips. These systems enable developers, researchers, and students to build, fine-tune, and run AI models directly from their desktops.
Why it matters:
Nvidia is making it easier and faster for developers to build smarter AI. With more powerful chips, tools for creating AI agents, and breakthroughs in robotics, these updates are shaping the future of what AI can do in the real world.
Intercom is building the future of AI customer service
Intercom's AI agent, Fin, has resolved over 15 million queries across chat and email. It's also G2's #1 ranked AI agent on the market.
But with Intercom's latest AI advancements, Fin is about to take customer service to the next level across more channels and on any platform.
Interested in learning more? Register for this upcoming demo with live Q&A on April 3rd to get a deeper look at Fin's capabilities.
*This is sponsored
VALLEY VIEW
Boston Dynamics has wowed us again with its latest Atlas robot video, showcasing smooth full-body movements like walking, cartwheels, and even breakdancing — all powered by reinforcement learning using motion capture and animation. The company is deepening its AI collaboration with Nvidia, integrating the Jetson Thor computing platform to run complex multimodal AI models, enhancing Atlas's capabilities. The future of robotics is getting more agile, intelligent, and impressive every day.
Google has introduced two new features for its Gemini AI assistant: Canvas and Audio Overview. Canvas offers an interactive workspace where users can collaboratively draft and edit documents or code in real time, making content adjustments easier. Audio Overview turns written materials into engaging audio summaries, presented as a podcast-style conversation between two AI hosts. Both features are currently available to Gemini and Gemini Advanced subscribers.
Adobe recently launched the Adobe Experience Platform Agent Orchestrator, aiming to enhance marketing and customer experience workflows. This new tool allows businesses to create, manage, and coordinate AI agents from both Adobe and third-party sources. The platform also introduces 10 specialized AI agents designed to optimize various tasks such as website management, content production, and audience targeting.
Stability AI has unveiled its new AI model, Stable Virtual Camera, which transforms 2D photos into immersive 3D scenes and videos with realistic depth and perspective. The model generates "novel views" from up to 32 images, allowing users to specify dynamic camera paths like Spiral, Dolly Zoom, or Pan. It supports multiple aspect ratios and up to 1,000 frames per video. You can access the model via Hugging Face.
Google has announced TxGemma, a suite of ‘open’ AI models aimed at revolutionizing drug discovery by making the process faster and more efficient. These models can analyze text and the structures of molecules, chemicals, and proteins. Researchers can use TxGemma to predict critical drug properties such as safety, efficacy, and bioavailability. While the models are set to launch later this month, Google has not clarified whether they will allow commercial use or customization.
LG has introduced EXAONE Deep, an advanced AI model designed for complex reasoning and real-world problem-solving. The flagship 32B model excels in mathematics, science, and programming while requiring far less computing power than larger models like DeepSeek-R1 and OpenAI's o1-mini. LG also offers lightweight 7.8B and 2.4B versions that can run locally with high efficiency, as in the sketch below.
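If you want to try one of the lightweight versions on your own machine, here's a minimal sketch using Hugging Face transformers. The repo id, the trust_remote_code flag, and the chat-template usage are assumptions based on how earlier EXAONE releases were published; check LG's official model card before running.

```python
# Minimal sketch: running a lightweight EXAONE Deep model locally.
# Assumption: the model is published on Hugging Face under an id like
# "LGAI-EXAONE/EXAONE-Deep-2.4B" -- verify against LG's official model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-Deep-2.4B"  # hypothetical id; check the model card

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps memory use modest
    device_map="auto",           # place weights on GPU if available (needs accelerate)
    trust_remote_code=True,      # earlier EXAONE releases shipped custom model code
)

# Reasoning models are usually prompted through their chat template.
messages = [{"role": "user", "content": "How many primes are there below 30?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```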
TRENDING TOOLS
Aha > The world's first AI influencer marketing team.
Modernbanc > Modern accounting software that works for you.
Attention Insight Figma Plugin > Instantly understand user behavior to optimize designs.
HuggingSnap by Hugging Face > A real-time visual assistant for your phone that describes text, objects, and scenes instantly.
Unify > Notion for AI Observability.
THINK PIECES / BRAIN BOOST
AI is "tearing apart" companies, survey finds.
AI model history is being lost.
Digital hygiene.
Not all AI-assisted programming is vibe coding.
Meet the humans building AI scientists.
When will AI systems be able to carry out long projects independently?
VALLEY GEMS
1/ An emerging law for agentic systems: the length of tasks AIs can do is doubling every seven months.
When will AI systems be able to carry out long projects independently?
In new research, we find a kind of “Moore’s Law for AI agents”: the length of tasks that AIs can do is doubling about every 7 months.
— METR (@METR_Evals)
3:39 PM • Mar 19, 2025
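For a rough feel of what that doubling implies, here's a back-of-the-envelope sketch; the one-hour starting point is an illustrative placeholder, not a METR figure.

```python
# Back-of-the-envelope extrapolation of the "doubling every ~7 months" trend.
# The starting task length below is an illustrative placeholder, not METR data.
DOUBLING_PERIOD_MONTHS = 7

def task_length(start_hours: float, months_elapsed: float) -> float:
    """Task length agents can handle after `months_elapsed`, doubling every 7 months."""
    return start_hours * 2 ** (months_elapsed / DOUBLING_PERIOD_MONTHS)

# If agents can handle ~1-hour tasks today, the trend implies:
for months in (0, 7, 14, 28):
    print(f"{months:>2} months out: ~{task_length(1.0, months):.0f}-hour tasks")
```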
2/ Nvidia showcases Blue, a cute little robot powered by the Newton physics engine. 2025 is shaping up to be a huge year for robotics.
NVIDIA has a new robot and it looks like cgi except it’s real and the movements are so organic. Pretty cool actually.
— Ian Miles Cheong (@stillgray)
10:12 AM • Mar 19, 2025
3/ Food delivery is going to be cheaper than ever soon.
Autonomous robotic bikes are the sleeper format
— @jason (@Jason)
3:18 PM • Mar 19, 2025
4/ Claude MCP for Unity is here.
Built an MCP that lets Claude talk directly to Unity. It helps you create entire games from a single prompt!
Here’s a demo of me creating a “Mario clone” game with one prompt. 👇
— Justin P Barnett (@JustinPBarnett)
11:22 AM • Mar 18, 2025
SUNSET IN THE VALLEY
That's all for today's issue. Thank you for reading.

💡 Help me get better and suggest new ideas at [email protected] or @heyBarsee
👍️ New reader? Subscribe here
Thanks for being here.
HOW WAS TODAY'S NEWSLETTER?
REACH 100K+ READERS
Acquire new customers and drive revenue by partnering with us
Sponsor AI Valley and reach more than 100,000 entrepreneurs, founders, software engineers, and investors.
If you’re interested in sponsoring us, email [email protected] with the subject “AI Valley Ads”.