
Runway launches Gen-3 Alpha Turbo

PLUS: Anthropic’s new Claude prompt caching

Together with Lambda

Howdy! It’s Barsee again.

Happy Friday, AI family, and welcome back to AI Valley 🐶

In today’s edition:

  • 🎥 Runway launches Gen-3 Alpha Turbo

  • 💸 Anthropic’s new Claude prompt caching will save developers a fortune

  • 🔍️ Plus trending AI tools and resources

Ready, set, go…

PARTNER 💫

Meet Hermes 3, the First Full-Parameter Fine-Tuning of Llama 3.1

Nous Research just launched Hermes 3, the first full-parameter fine-tuning of Meta's Llama 3.1 405B model, trained on Lambda's 1-Click Cluster.

Hermes 3 can run on a single node or scale quickly to a multi-node 1-Click Cluster for further fine-tuning on Lambda's infrastructure.

Try your prompts immediately on lambda.chat or use Lambda's Chat Completions API for free.
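If you'd rather script it than use the chat UI, here's a rough sketch against Lambda's OpenAI-compatible Chat Completions API. The base URL and model id below are assumptions for illustration; check Lambda's docs for the current values.

```python
# A rough sketch of querying Hermes 3 through Lambda's OpenAI-compatible
# Chat Completions API. The base URL and model id are assumptions here;
# check Lambda's documentation for current values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.lambdalabs.com/v1",  # assumed endpoint
    api_key="YOUR_LAMBDA_API_KEY",             # placeholder key
)

resp = client.chat.completions.create(
    model="hermes-3-llama-3.1-405b-fp8",  # assumed model id
    messages=[
        {"role": "user", "content": "Explain full-parameter fine-tuning in one paragraph."}
    ],
)
print(resp.choices[0].message.content)
```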

MAIN SCOOP

Runway launches Gen-3 Alpha Turbo 🎥

Runway ML has officially released Gen-3 Alpha Turbo, the latest version of its AI video generation model, which the company claims is 7x faster and half the cost of its predecessor, Gen-3 Alpha.

How fast is it?

The new "turbo" version of Runway’s Gen-3 AI model can generate a 10-second video from a single image in just 15 seconds, making video production almost real-time.

How does it compare to the base model?

While the base model handles dynamic motion well, it can sometimes introduce distortions or unnatural visual shifts. Turbo, on the other hand, produces steadier, simpler motion.

Runway has shared side-by-side comparisons of the two models.

Anthropic’s new Claude prompt caching will save developers a fortune 💸

Anthropic has introduced a prompt caching feature for Claude, improving its ability to handle repetitive tasks that involve large amounts of detailed contextual information.

How Does It Work?

The feature lets developers cache frequently reused prompt prefixes, such as long documents or system instructions, so Claude can draw on the cached data across API calls, cutting costs by up to 90% and latency by up to 85%.
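Here's a minimal sketch of what that looks like in Anthropic's Python SDK. The beta header and model name reflect the feature's launch-era docs (check Anthropic's current documentation), and the document path and question are invented for illustration.

```python
# A minimal sketch of prompt caching with Anthropic's Python SDK.
# The beta header and model name reflect the feature's launch-era docs;
# the document path and question are invented for illustration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

long_document = open("contract.txt").read()  # hypothetical large context

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    # The system block carries the big, reusable context. cache_control
    # asks the API to cache this prefix so later calls can reuse it
    # instead of paying full input price again.
    system=[
        {
            "type": "text",
            "text": "Answer questions about this document:\n\n" + long_document,
            "cache_control": {"type": "ephemeral"},
        }
    ],
    # Only the short, changing part of the prompt goes uncached.
    messages=[{"role": "user", "content": "Summarize the termination clause."}],
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)
print(response.content[0].text)
```

The first request writes the prefix to the cache; requests reusing the same prefix shortly afterward (the cache lifetime was about five minutes at launch) are billed at the discounted cache-read rate.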

How is it useful?

Given that Anthropic's models have a context window of 200,000 tokens, the feature is particularly valuable for tasks built on large contexts, such as querying long documents or carrying extensive chat histories.

Why does it matter?

Developers can now cut their operational costs significantly, since cached prompt reads are priced much lower than base input tokens.
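As a back-of-the-envelope illustration, assuming Claude 3.5 Sonnet's launch-era published rates of roughly $3.00 per million base input tokens, $3.75 per million for cache writes, and $0.30 per million for cache reads (verify against current pricing):

```python
# Back-of-the-envelope savings, assuming Claude 3.5 Sonnet's launch-era
# published rates: $3.00/MTok base input, $3.75/MTok cache writes,
# $0.30/MTok cache reads (verify against current pricing).
PROMPT_TOKENS = 100_000  # e.g. one long document reused across calls
CALLS = 50               # questions asked against the same document

without_cache = CALLS * PROMPT_TOKENS / 1e6 * 3.00
with_cache = (PROMPT_TOKENS / 1e6 * 3.75                  # one cache write
              + (CALLS - 1) * PROMPT_TOKENS / 1e6 * 0.30)  # 49 cache reads

print(f"without caching: ${without_cache:.2f}")  # -> $15.00
print(f"with caching:    ${with_cache:.2f}")     # -> ~$1.85, about 88% less
```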


USEFUL AI LINKS

Trending Tools

  • Kerlig for macOS > Write replies on Slack or Mail, fix spelling, chat with documents, etc.

  • Sparkle > Create and organize your files automatically using AI.

  • Elevenstudios by ElevenLabs > Fully managed video and podcast dubbing.

  • Trellis > Converts unstructured data (like documents, calls, and emails) into structured formats for easy database analysis.

AI Guides / Findings

  • How I won $2,750 using JavaScript, AI, and a can of WD-40.

  • How to create an AI-powered Q&A system with your company’s data?

  • Why are AI-generated images so shiny/glossy?

  • How to use Gemini to find cheap flight tickets?

*Indicates our partner’s link

THE LATEST IN

Tech

  • Intel partners with Karma Automotive to develop software-defined car architecture.

  • Waymo to double down on winter testing its robotaxis.

DAILY DOSE OF CONTENTS

1/ The easiest way to create a voice-to-voice assistant.

2/ A video call with an AI Agent.

THAT’S ALL FOR TODAY

Thanks for reading. Until next time!

💡 Help me get better and suggest new ideas at [email protected] or @heyBarsee

👍️ Like what you see? Subscribe here

REACH 90K+ READERS

Acquire new customers and drive revenue by partnering with us

Sponsor AI Valley and reach 90,000+ entrepreneurs, founders, software engineers, investors, and more.

If you’re interested in sponsoring us, email [email protected] with the subject “AI Valley Ads”.


We appreciate your continued support! We'll catch you in the next edition 👋

💚 Written and edited by Barsee, Jet, and Akash Thapa.