Key AI Launches This Week
AI Weekly Notes: What Caught My Attention
This week felt like a reminder that the groundwork under AI is shifting again. The stories weren't loud for the sake of being loud; they pointed to where things are heading: out in orbit, inside ecosystems, and deeper into our workflows.
Google Wants to Move AI Compute Into Space
I kept coming back to this one. Google is pushing ahead with Project Suncatcher: AI data centres in orbit, powered solely by solar energy and running Google's own TPU hardware. The details landed harder than I expected: TPUs have already passed radiation testing, and two demo satellites are set for 2027.
What struck me most wasn’t the spectacle of containers in space, but the quiet signal: AI scale is hitting energy ceilings on Earth. If compute is heading skyward, then the software we build today needs to stop assuming infinite electricity or endless GPUs. Even for our projects, it’s a nudge to stay sharp about model size, caching, and efficient planning now rather than later.
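On the caching point, here is a minimal sketch of memoising expensive model calls so identical requests are only paid for once. The `call_model` function is a hypothetical stand-in, not any real API:

```python
from functools import lru_cache

# Hypothetical stand-in for an expensive model call; in practice this
# would hit a local model or a remote API and cost time and energy.
def call_model(prompt: str) -> str:
    call_model.invocations += 1  # track how often we actually pay the cost
    return f"response to: {prompt}"

call_model.invocations = 0

@lru_cache(maxsize=1024)
def cached_call(prompt: str) -> str:
    # Identical prompts are served from memory instead of recomputed.
    return call_model(prompt)

cached_call("summarise this week's AI news")
cached_call("summarise this week's AI news")  # cache hit, no second model call
print(call_model.invocations)  # 1
```

A real cache would also need an eviction policy and cache keys that account for model version and parameters, but the habit is the same: don't recompute what you already have.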
Rakuten Turns Its Whole Ecosystem Into an AI Agent
Rakuten released its full-scale "Rakuten AI," and what grabbed me wasn't the app itself; it's that the company is placing the agent at the center of everything: messages, shopping, lifestyle tasks, and soon their core e-commerce marketplace.
It accepts text, voice-to-text, and image queries. But more importantly, it can carry out multi-step tasks across the Rakuten ecosystem based on what the user is trying to do. This is the sort of architecture where the UI becomes optional: the agent handles intent, and the services fall in line.
There’s something here for our own systems: the future feels less like static modules and more like “what action does the user want to take across the entire product?”
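To make that idea concrete, here is a toy sketch of intent-based dispatch: detect what the user wants, then route to whichever service fulfils it. Every name here (`detect_intent`, the handler functions, the keyword classifier) is a hypothetical illustration, not Rakuten's actual architecture:

```python
from typing import Callable

# Hypothetical service actions; in a real system these would call
# messaging, shopping, or marketplace backends.
def send_message(query: str) -> str:
    return f"message sent: {query}"

def search_products(query: str) -> str:
    return f"products found for: {query}"

# Map detected intents to the service that fulfils them. The UI stops
# mattering; only the user's intent decides which service runs.
INTENT_HANDLERS: dict[str, Callable[[str], str]] = {
    "message": send_message,
    "shop": search_products,
}

def detect_intent(query: str) -> str:
    # Toy keyword classifier standing in for a real intent model.
    return "shop" if "buy" in query.lower() else "message"

def handle(query: str) -> str:
    intent = detect_intent(query)
    return INTENT_HANDLERS[intent](query)

print(handle("buy running shoes"))  # routed to the shopping service
```

The payoff of this shape is that adding a new capability means registering one more handler, not redesigning a screen.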
LangChain Adds Secure Remote Sandboxes for Agent Code Execution
LangChain rolled out something developers have been waiting for: remote sandboxed environments for DeepAgents. Instead of letting agent-generated code run on your laptop, these sandboxes (via Runloop, Daytona, and Modal) isolate everything: files, commands, execution context.
It solves real friction: dependency clashes, unsafe commands, long-running jobs that hog local CPUs, and the need for disposable environments. Now agents can spin up clean spaces, run tasks, inspect outputs, and shut them down afterward without touching your machine.
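That lifecycle, spin up a clean space, run the task, inspect the output, tear it all down, can be sketched locally with a throwaway directory and a subprocess. This is a stand-in for the idea only, not LangChain's actual sandbox API, and the remote providers add real isolation that a temp directory cannot:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Local stand-in for a disposable sandbox: a temp directory that is
# created, used for one task, then destroyed. Remote providers such as
# Runloop, Daytona, or Modal apply the same lifecycle on isolated machines.
def run_in_sandbox(code: str) -> str:
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "task.py"
        script.write_text(code)
        # Run the agent-generated code with the sandbox dir as cwd so any
        # files it writes stay inside the disposable environment.
        result = subprocess.run(
            [sys.executable, str(script)],
            cwd=workdir,
            capture_output=True,
            text=True,
            timeout=30,
        )
        return result.stdout
    # workdir is deleted on exit from the with-block; nothing lingers.

print(run_in_sandbox("print(2 + 2)"))  # 4
```

The timeout and captured output mirror the friction points above: long-running jobs get a hard stop, and results come back as data rather than side effects on your machine.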
For anyone building agent workflows, including us, this lowers the barrier to letting agents handle heavier or riskier tasks.
Google Photos Rolls Out Nano Banana AI Editing on iOS
Google expanded its Magic Editor features to iOS using on-device Gemini processing. The privacy angle is clear: edits are generated on the phone instead of going to cloud servers. They also shipped notification summaries and new AI-powered finance search features. It’s subtle but marks another step toward AI tools running locally instead of relying on external compute.
Baidu’s ERNIE-4.5-VL-28B-A3B-Thinking Tops Vision Benchmarks
Baidu pushed out a model that reportedly outperforms major U.S. labs in video and image reasoning. The interesting part is how tightly they are weaving it into their search and voice products, making their search experience more "assistant-like" while competing directly with DeepSeek's momentum in China.
GitHub Copilot Switches to Claude Haiku 4.5 Free for All Developers
GitHub swapped out Sonnet 3.5 and moved Copilot's backend to Claude Haiku 4.5. Haiku now supports code execution and delivers a noticeable speed lift for the millions of developers using it daily.
The big shift: Copilot is now free, removing one of the last barriers for developers entering AI-assisted coding.