If you hung out on LinkedIn last week, you may have seen some chatter about Google’s new “AI Mode” that was just released to U.S. users.
What got less attention is the fact that, during their annual I/O developer conference, they simultaneously released dozens of other AI tools and features. And when I say dozens, I don’t mean it figuratively.

I spent some time over the weekend digging through the announcements, and I even tested a few of the new tools. I’ll share my six favorites with you in just a moment. Afterwards, I’ll briefly highlight some of the others. Then we’ll finish up with my take on what to make of this unprecedented release schedule.
Top 6 AI tools from the Google I/O conference
AI Mode

Think of AI Mode as Google Search getting a complete personality makeover. Instead of just giving you links, it has conversations with you about complex topics, using a custom version of Google’s Gemini 2.5 to provide thorough, AI-powered responses with sources.
If you enjoy using Perplexity AI, then you’ll feel right at home with Google AI Mode – it’s essentially Perplexity but with a Google wrapper.
You can ask multi-part questions like “What’s the difference between smart rings, smartwatches, and sleep tracking mats?” and get a comprehensive breakdown that would normally take several separate searches. You can even ask follow-up questions, making research feel more like chatting with a knowledgeable friend than hunting through search results.
Key points 🗝️
- Available Now: ✅
- Where: US only (expanding soon)
- Free: ✅
SynthID

SynthID works by embedding invisible watermarks directly into AI-generated text, images, video, and audio – watermarks that humans can’t detect but specialized software can spot.
With the new SynthID Detector portal, you can upload content and it’ll actually tell you if it contains watermarks, even highlighting specific parts that are most likely AI-generated.
Google has already watermarked over 10 billion pieces of content 1 and made the text watermarking version open source so other companies can use it. They’re also partnering with NVIDIA and others to expand the watermarking ecosystem beyond just Google’s tools.
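If you’re curious how a watermark can hide inside plain text at all, Google’s published description of SynthID-Text boils down to biasing token sampling with a keyed pseudorandom function, then checking for that bias at detection time. Here’s a minimal toy sketch of that core idea in Python – every name is illustrative and this is a drastic simplification of the real scheme (which uses tournament sampling and preserves text quality), not SynthID’s actual API:

```python
import hashlib

def g_score(key: str, context: tuple, token: str) -> float:
    """Keyed pseudorandom score in [0, 1) for a token in a given context."""
    digest = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def generate_watermarked(next_candidates, prompt, key, length=100, ngram=4):
    """Toy generation loop: among the model's candidate next tokens, prefer
    the one the keyed function scores highest - that bias IS the watermark."""
    tokens = list(prompt)
    for _ in range(length):
        context = tuple(tokens[-ngram:])
        candidates = next_candidates(tokens)  # stand-in for a model's top-k
        tokens.append(max(candidates, key=lambda t: g_score(key, context, t)))
    return tokens

def detect(tokens, key, ngram=4, threshold=0.6):
    """Mean keyed score over the text: unwatermarked text averages ~0.5,
    watermarked text scores noticeably higher."""
    scores = [g_score(key, tuple(tokens[max(0, i - ngram):i]), tokens[i])
              for i in range(len(tokens))]
    mean = sum(scores) / len(scores)
    return mean, mean > threshold

def toy_model(tokens):
    return list("abcdefgh")  # stand-in for a real model's candidate tokens

text = generate_watermarked(toy_model, list("seed"), key="secret")
print(detect(text, key="secret"))  # high mean score (~0.87) -> True
print(detect(list("plain human-written text"), key="secret"))  # ~0.5 -> usually False
```

Because only the key holder can recompute the scores, a detector with the key can flag watermarked text while readers see nothing unusual – which is essentially what the SynthID Detector portal does at scale.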
Key points 🗝️
- Available Now: ❌
- Wait List: For testers
- Free: ✅
Google Flow

Flow is Google’s answer to OpenAI’s Sora. It fuses their Veo video model with their Imagen image generator and sprinkles a little Gemini text processing on top. The result is a single toolkit that can produce videos from prompts.
You can describe scenes in plain English, import your own assets, or generate everything from scratch, then use tools like camera controls and scene builders to craft cinematic clips and stories.
The platform evolved from their earlier VideoFX experiment and includes neat features like asset management and “Flow TV” where you can see what prompts other creators used.
Key points 🗝️
- Available Now: ✅
- Where: 70+ countries
- Free: ❌ (starting from $20 / month)
Project Astra

Project Astra represents Google’s ambition to build the ultimate AI assistant. In contrast to basic chatbots, Astra can use your phone’s camera to understand your surroundings, remember past conversations, and even control apps on your Android device to complete tasks.
Imagine asking it to help fix your bike, and it automatically finds the manual, opens a YouTube tutorial, and contacts bike shops about parts availability – all while you’re doing other things.
The technology is being integrated into Gemini Live and Search and is designed to work on phones and eventually smart glasses.
Key points 🗝️
- Available Now: ❌
- Where: Limited testers only
- Wait List: Open
Jules

Jules is Google’s take on an autonomous coding assistant that gets stuff done while you focus on the fun parts of programming.
Unlike typical coding assistants that just suggest completions, Jules can tackle entire tasks like fixing bugs, writing tests, or implementing features by cloning your GitHub repository into a secure cloud environment.
You can assign it a task and walk away while it creates a plan, makes changes, and prepares a pull request for your review. It’s language agnostic, but works best with projects that use JavaScript/TypeScript, Python, Go, Java, and Rust.
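Google hasn’t published Jules’ internals, but the workflow it describes – clone into a sandbox, plan, edit, run tests, open a pull request – maps onto a fairly standard agent loop. Here’s a purely illustrative sketch of that loop, where the hypothetical stand-ins (propose_plan, apply_edit) represent the LLM calls a real agent would make; none of this is Jules’ actual API:

```python
import subprocess
import tempfile
from dataclasses import dataclass, field

@dataclass
class Task:
    repo_url: str
    description: str  # e.g. "fix the flaky date-parsing test"
    plan: list[str] = field(default_factory=list)

def clone_into_sandbox(repo_url: str) -> str:
    """Clone the repo into an isolated workdir (Jules uses a secure cloud
    VM; a local temp dir stands in for that here)."""
    workdir = tempfile.mkdtemp(prefix="agent-")
    subprocess.run(["git", "clone", repo_url, workdir], check=True)
    return workdir

def tests_pass(workdir: str) -> bool:
    """Run the project's test suite - the agent's feedback signal."""
    return subprocess.run(["python", "-m", "pytest"], cwd=workdir).returncode == 0

def agent_loop(task: Task, propose_plan, apply_edit, max_attempts=5):
    """Plan -> edit -> test, retrying each step until the tests pass."""
    workdir = clone_into_sandbox(task.repo_url)
    task.plan = propose_plan(task.description, workdir)  # LLM call in reality
    for step in task.plan:
        for _ in range(max_attempts):
            apply_edit(step, workdir)  # LLM-driven code edit in reality
            if tests_pass(workdir):
                break
    subprocess.run(["git", "commit", "-am", task.description], cwd=workdir)
    # A real agent would now push a branch and open a pull request for review.
    return workdir
```

The interesting design choice is the sandbox: because the agent works on a clone in an isolated environment and hands back a pull request instead of pushing straight to main, you stay in the review loop even when you walk away.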
Key points 🗝️
- Available Now: ✅ (public beta)
- Where: Global (wherever the Gemini model is available)
- Free: ✅ (with limits)
- Languages: JS, Python, Go, Java, Rust
Google Beam

Google Beam is the evolution of Project Starline, Google’s attempt to make video calls feel like you’re actually sitting across from someone in the same room.
Using six cameras, custom light field displays, and AI processing, Beam creates 3D representations of people on calls without requiring VR headsets or glasses.
The technology combines computer vision, machine learning, and real-time compression to achieve what Google calls “near-perfect” millimeter-level head tracking at 60 frames per second. While pricing hasn’t been announced, the complexity of the hardware suggests this won’t be showing up in home offices anytime soon – think more corporate conference rooms than kitchen table video calls.
Key points 🗝️
- Available Now: ❌ (late 2025)
- Where: Enterprise customers first
- Cost: TBD (probably expensive)
And that’s not even close to everything
You might think that six major new tools would already be a lot – but then, this is Google we’re talking about. As cool as they are, those six barely scratch the surface of the 100 things that Google announced at the I/O conference last week.
And when I say one hundred, I’m not exaggerating. Google published a blog post about it and the title of the post is literally “100 things we announced at I/O.” 2
Here are a few of the other major announcements:
- Real-time speech translation in Google Meet that preserves your voice tone and expressions
- Android XR smart glasses with Samsung partnership and Gemini integration
- NotebookLM Video Overviews that turn documents into narrated video summaries
- Three new Gemini model variants: 2.5 Flash, Gemma 3n, and specialized models for medicine (MedGemma) and sign language (SignGemma)
- Fully agentic Google Colab that fixes code errors automatically
- Multiple developer tools: Stitch for UI generation, Journeys for app testing, Version Upgrade Agent for dependency updates
- Lyria 2 music generation with real-time composition capabilities
- Firebase Studio improvements with Figma import support
- Project Mariner, a browser agent that navigates websites and completes complex tasks for you
If your head is spinning trying to keep track of all this, you’re not alone.
The overlap problem
I’m not an expert on the inner workings of Google. Nor do I have a flowchart that shows Google’s various divisions that ultimately connect up to CEO Sundar Pichai.
But from an outsider’s perspective, it feels like the company is a bunch of loosely connected silos that carry the Google brand name but, in practice, do their own thing. Then, whenever one of them builds something cool, it gets sent up the pipeline and the Google C-suite tries to make it fit into the company vision somehow.
Again, I’m not saying this is how it actually works. This is just how it comes across to me, as an outside observer.
Why do I say that?
Because many of the tools overlap with each other, and it comes across as if Google is just playing a game of product darts to see which ones stick.
Take coding assistants, for example.
Jules is Google’s new autonomous coding agent that I just talked about earlier. But! They also have Gemini Code Assist, Google Colab, AI Studio, and Firebase for building apps.
How many different ways do we need to ask AI to write code for us?
Or consider the search experience confusion: the new AI Mode lets you chat with Google while browsing. But don’t mix that up with Gemini in Chrome, which also lets you ask AI questions while browsing. And definitely don’t confuse either of those with Search Live (part of Project Astra), which lets you chat with Search about what your phone camera sees.
If you can make sense of all that, you deserve a trophy from Mr. Pichai.
The bigger picture
What makes this whole situation even more wild is that Google is currently facing a major antitrust lawsuit from the U.S. government. 3 Uncle Sam’s argument is that Google has become too dominant in web search, is stifling competition, and needs to be broken up.
It makes you wonder if this volcanic eruption of AI tools is some kind of unconventional legal tactic to prove that they’re not focused on any one thing. It’s a fascinating contradiction when you think about it: they’ve been accused of being too focused on search dominance, so they respond by appearing completely unfocused in their product strategy.
Don’t get me wrong – the individual tools are genuinely impressive. Real-time speech translation that preserves your voice? AI that can autonomously fix your code? Video calls that make people appear three-dimensional? Any one of these would have been headline news just a few years ago.
But when you release everything at once in a tsunami of announcements, even breakthrough technology gets lost in the noise.
Maybe this scattershot approach will work for Google, though. Maybe having teams work independently and launch rapidly is exactly how you stay ahead in the AI race. Well, as the old saying goes: “only time will tell.”
Have you been following along with Google’s latest AI releases? Which tool are you most excited to try? Which of the ones I mentioned here are new to you? Let me know in the comments. I’d love to get your take.
…
Don’t forget to join our crash course on speeding up your WordPress site. Learn more below: