Google Nano Banana 2 loading ...
PLUS: AI-powered micro-drones that kill mosquitoes
Together with WorkOS
Howdy, it’s Barsee.
Happy Monday, AI family, and welcome to another AI Valley edition. This issue takes 5 minutes to read.
Today’s climb through the Valley reveals:
Early look at images generated by Nano Banana 2
AI-powered micro-drones that kill mosquitoes
We already crossed the Turing test and barely noticed
Plus trending AI tools, posts, and resources
Let’s dive into the Valley of AI…
WORK OS
As AI agents connect to enterprise systems through MCP, secure authorization is essential.
WorkOS explains how to implement OAuth 2.1 with PKCE, scopes, user consent, and token revocation to give agents scoped, auditable access without relying on API keys.
Learn how to design production-ready MCP auth for enterprise-grade AI systems.
*This is sponsored
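For a concrete picture of the PKCE piece, here is a minimal sketch in Python. The endpoint URL, client ID, redirect URI, and scope names are placeholder assumptions for illustration, not WorkOS-specific values.

```python
# Minimal PKCE sketch (RFC 7636) showing the verifier/challenge pair an MCP
# client would use in an OAuth 2.1 authorization-code flow.
# All URLs, IDs, and scopes below are placeholders, not WorkOS specifics.
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def make_pkce_pair() -> tuple[str, str]:
    # High-entropy verifier (43+ chars), kept secret by the client.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # S256 challenge sent in the authorization request.
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()
    ).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()

auth_url = "https://auth.example.com/oauth2/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "mcp-agent",                 # placeholder client ID
    "redirect_uri": "http://localhost:8080/callback",
    "scope": "tickets:read tickets:write",    # scoped, auditable access
    "code_challenge": challenge,
    "code_challenge_method": "S256",
})
# The user consents at auth_url; the agent then exchanges the returned code
# plus `verifier` at the token endpoint for a short-lived access token.
print(auth_url)
```

Because the verifier never leaves the client until the token exchange, an intercepted authorization code is useless on its own; paired with narrow scopes and revocable, short-lived tokens, that is what replaces static API keys.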
THROUGH THE VALLEY
Google’s next image model, Nano Banana 2, appears to be close to release. The update follows strong user adoption of the first Nano Banana inside the Gemini app and related Google products, and this version makes clear technical leaps: it can now handle precise coloring, detailed angle control, and text correction within images, all areas where Nano Banana 1 struggled. Leaked samples show major visual gains and smoother handling of complex composition tasks the first model couldn’t manage.
A standout change is its multi-step generation workflow: the model first plans the output, creates an image, analyzes it with built-in vision tools, detects flaws, and iterates before delivering the final version. This self-review loop (new to the series) improves precision for design and product imagery.
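That workflow reads like a plan, generate, critique, revise loop. Below is a hypothetical sketch of that control flow; the stub functions stand in for model calls, and none of it reflects Google’s actual Nano Banana 2 implementation or API.

```python
# Hypothetical sketch of the plan -> generate -> critique -> revise loop
# described above. Stub functions simulate the model calls.

def plan(prompt: str) -> dict:
    # Stand-in for the planning step: decide layout, text regions, colors.
    return {"prompt": prompt, "layout": "centered subject", "text_regions": []}

def generate_image(layout: dict, feedback: list[str] | None = None) -> str:
    # Stand-in for the image generator; returns a fake image handle.
    return f"image(layout={layout['layout']}, fixes={feedback or []})"

def critique(image: str, prompt: str, round_: int) -> list[str]:
    # Stand-in for the built-in vision check; pretend flaws vanish by round 2.
    return ["garbled text in sign"] if round_ == 0 else []

def generate_with_self_review(prompt: str, max_rounds: int = 3) -> str:
    layout = plan(prompt)                       # 1. plan the output
    image = generate_image(layout)              # 2. first draft
    for r in range(max_rounds):
        flaws = critique(image, prompt, r)      # 3. inspect with vision tools
        if not flaws:                           # 4. done when no flaws remain
            break
        image = generate_image(layout, feedback=flaws)  # 5. revise and re-check
    return image

print(generate_with_self_review("product shot of a banana-yellow sneaker"))
```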
Though not public yet, Google has started quietly testing Nano Banana 2 inside Gemini, and early UI cards suggest a broader release is close. Internally codenamed GEMPIX 2, the system may still sit atop Imagen 4 or hybrid Gemini Flash tech, depending on variant. Its simultaneous visibility across experimental platforms like Whisk Labs mirrors the multi-platform rollout of the original model.
Why does it matter?
Nano Banana 2 signals Google’s quiet but steady progress in bringing agent-like intelligence to image generation. By combining planning, self-correction, and multimodal reasoning, it blurs the line between prompt-based creation and true iterative design. Rather than chasing photorealism alone, Google appears focused on control and reliability (qualities that could make AI imagery more production-ready across the entire Gemini ecosystem).
A US-based startup, Tornyol, is building an AI-powered micro-drone that hunts and kills mosquitoes. Using smartphone microphones, car-parking sonar sensors, and custom DSP algorithms, their prototype tracks a mosquito by sound and rams into it mid-air. The goal: cut the cost of mosquito control by 100× and make it cheap enough to deploy at scale.
The upcoming product pairs a swarm of micro-drones with a base station that patrols your yard around the clock. For $50 a month, it promises mosquito-free evenings within days of operation.
Their phased-array sonar sends ultrasonic pulses, listens for echoes with a grid of microphones, and identifies a mosquito by its wing-beat Doppler pattern. Each species has a unique acoustic signature. Once detected, a control algorithm locks on and guides the drone to intercept the target while avoiding walls.
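As a rough illustration of the acoustic idea (not Tornyol’s actual DSP), the sketch below simulates a 40 kHz echo whose amplitude is modulated by a wing beat, then flags a target when the envelope spectrum peaks in a plausible mosquito band. The sample rate, carrier frequency, wing-beat rate, band edges, and threshold are all illustrative assumptions.

```python
# Back-of-the-envelope wing-beat Doppler detection sketch.
# An ultrasonic echo reflected off flapping wings picks up sidebands at the
# wing-beat rate; envelope-detecting the echo and finding a spectral peak in
# the mosquito band (~300-700 Hz) flags a likely target.
import numpy as np

fs = 192_000                  # sample rate (Hz), assumed
t = np.arange(0, 0.05, 1/fs)  # 50 ms listening window
carrier_hz = 40_000           # typical parking-sensor ultrasonic frequency
wingbeat_hz = 480             # plausible mosquito wing-beat rate

# Simulated echo: carrier amplitude-modulated by the wing beat, plus noise.
echo = (1 + 0.3*np.sin(2*np.pi*wingbeat_hz*t)) * np.sin(2*np.pi*carrier_hz*t)
echo += 0.1*np.random.randn(t.size)

# Square-law envelope detector, then FFT of the (mean-removed) envelope.
envelope = echo**2
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1/fs)

band = (freqs > 300) & (freqs < 700)              # mosquito wing-beat band
peak_hz = freqs[band][np.argmax(spectrum[band])]
is_mosquito = spectrum[band].max() > 5 * np.median(spectrum[freqs > 100])
print(f"peak ~{peak_hz:.0f} Hz, mosquito-like: {is_mosquito}")
```

A real system would add ranging, species classification, and the intercept guidance the article mentions, but the core signal is the same: a periodic modulation of the echo at the wing-beat frequency.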
Why does it matter?
Mosquito-borne diseases kill more than 700,000 people every year, and traditional spraying is slow, costly, and environmentally harmful. If Tornyol’s $50-a-month backyard drones scale to real-world performance, they could make precision mosquito eradication affordable worldwide, turning what started as a hobbyist experiment into a plausible weapon against malaria itself.
AI systems can now hold conversations, reason through complex problems, and outperform the brightest humans in some of our hardest intellectual contests. Yet daily life hasn’t transformed as much as expected. Most people still see AI as chatbots or smarter search engines, while under the surface, these systems are already edging toward scientific discovery and autonomous research.
Just a few years ago, AI could only perform tasks that took humans seconds. Now it tackles problems that take hours, and soon (perhaps within two years), it will handle projects that would take people weeks or months. The real unknown is what happens when it can complete work that would take centuries. Meanwhile, the cost per unit of intelligence is falling by around 40× per year, suggesting exponential accessibility ahead.
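To make that compounding concrete, here is a quick back-of-envelope calculation assuming the cited ~40× annual decline holds (illustrative only, not a forecast):

```python
# Back-of-envelope compounding of a ~40x/year cost decline.
cost = 1.0  # relative cost of a fixed unit of intelligence today
for year in range(1, 4):
    cost /= 40
    print(f"year {year}: ~1/{40**year:,} of today's cost ({cost:.2e})")
```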
Researchers at OpenAI expect that by 2026, AI will make small discoveries independently, and by 2028, significant breakthroughs could become routine. Despite that, they predict that everyday life will still feel oddly familiar; society adapts slowly, even to technologies that transform everything around it.
AI’s biggest promise lies in practical impact: improving healthcare, accelerating materials and drug discovery, modeling climate systems, and expanding access to personalized education. As these benefits compound, the challenge is ensuring AI improves lives, not just productivity.
Why does it matter?
Humanity is entering an age where intelligence itself is scalable, a tool that can generate knowledge faster than we can comprehend it. Whether that leads to abundance or instability depends less on AI’s capability than on how we manage it. If society builds strong safeguards and equitable access, AI could mark not the end of human relevance, but the start of a world where discovery, learning, and creativity are shared more broadly than ever before.
TRENDING TOOLS
Parallel Search > An agent-optimized search API that builds context windows with high-signal, low-noise data from fresh crawls. It delivers faster responses, uses fewer tokens, and produces higher-quality outputs than traditional engines
Caddy > A universal voice control assistant that lets you command any app on your computer hands-free
Nessie > Turns your ChatGPT chats into structured, searchable notes so you can rediscover insights and connect ideas instantly
Recast > Replace any character in a video with one click. Upload your clip, add a photo of the new character, and instantly match their voice, language, and background to your scene.
THINK PIECES / BRAIN BOOST
THE VALLEY GEMS
What’s trending on social today:
THAT’S ALL FOR TODAY
Thank you for reading today’s edition.

💡 Help me get better and suggest new ideas at [email protected] or @heyBarsee
👍️ New reader? Subscribe here
Thanks for being here.
HOW WAS TODAY'S NEWSLETTER?
REACH 100K+ READERS
Acquire new customers and drive revenue by partnering with us
Sponsor AI Valley and reach 100,000+ entrepreneurs, founders, software engineers, investors, and more.
If you’re interested in sponsoring us, email [email protected] with the subject “AI Valley Ads”.





