AI News of the Week (28th February, 2026)
TLDR: Google launches Nano Banana 2, Perplexity introduces "Computer", and OpenAI forms Frontier Alliances. This week also sees Samsung integrating Perplexity into the Galaxy S26, Figma adding OpenAI Codex, Anthropic launching Claude Code Remote Control, and NVIDIA revealing the Vera Rubin AI system.
Google Launches Nano Banana 2 as Default Image Generator Across All Products
On February 26, Google DeepMind launched Nano Banana 2 (officially Gemini 3.1 Flash Image), a new image generation model that brings the advanced reasoning and quality of Nano Banana Pro down to Flash-level speed and pricing. The model becomes the default for image generation across the Gemini app's Fast, Thinking, and Pro modes, as well as Google Search (via Lens and AI Mode in 141 countries), the Flow video editing tool, Vertex AI, and the Gemini API. Key capabilities include resolutions from 512px to 4K, subject consistency across up to five characters, object fidelity for up to 14 elements per workflow, and real-time web image search as a grounding context tool. For developers, the model is available immediately via the Gemini API and Google AI Studio with a paid API key. All generated images carry a SynthID watermark and are interoperable with C2PA Content Credentials — Google notes the SynthID verification feature has been used over 20 million times since its November launch.
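For developers, a call through the Python google-genai SDK might look like the sketch below. The model identifier and the practice of hinting the output size in the prompt are assumptions for illustration only; the official model id and any dedicated resolution parameter should be checked in Google AI Studio.

```python
# Hypothetical sketch of generating an image with Nano Banana 2 via the
# google-genai SDK (pip install google-genai). Requires a paid API key in
# the GEMINI_API_KEY environment variable. The model id below is assumed.
SUPPORTED_SIZES = ("512px", "1K", "2K", "4K")  # per the announced 512px-4K range


def pick_size(size: str) -> str:
    """Validate a requested output resolution against the announced range."""
    if size not in SUPPORTED_SIZES:
        raise ValueError(f"unsupported size {size!r}; expected one of {SUPPORTED_SIZES}")
    return size


def generate_image(prompt: str, size: str = "1K"):
    from google import genai  # third-party SDK; imported lazily

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    # The real API may expose a dedicated resolution/config field; here the
    # size is simply appended to the prompt as an illustrative hint.
    return client.models.generate_content(
        model="gemini-3.1-flash-image",  # assumed id for Nano Banana 2
        contents=f"{prompt} ({pick_size(size)} output)",
    )


# Usage (not executed here): generate_image("a watercolor lighthouse", size="2K")
```

Generated images would carry the SynthID watermark automatically; no opt-in is described in the announcement.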
Perplexity Launches "Computer": A Multi-Model Digital Worker That Runs Entire Workflows
On February 25, Perplexity AI launched Perplexity Computer, repositioning itself from a search engine into a general-purpose digital worker platform. The system orchestrates 19 frontier AI models — using Claude Opus 4.6 as the core reasoning engine, Gemini for deep research, Nano Banana for image generation, Veo 3.1 for video, and GPT-5.2 for long-context recall — and assigns each task to the most suitable model automatically. Users describe a desired outcome, and Computer decomposes it into subtasks handled by sub-agents with real browser access, a real filesystem, and 400+ app integrations, capable of running asynchronously for hours or months. The launch is positioned as a more controllable alternative to local agent tools like OpenClaw: Computer operates entirely within Perplexity's cloud, with spending caps per project. Access is currently limited to Perplexity Max subscribers at $200/month (10,000 credits/month), with Pro and Enterprise rollout to follow load testing.
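Perplexity has not published Computer's routing logic, but the described behavior — each subtask dispatched to the most suitable of its frontier models, with Claude Opus 4.6 as the default reasoner — can be sketched as a toy dispatcher. The category names and fallback rule here are assumptions, not Perplexity's implementation:

```python
# Toy illustration of multi-model task routing as described in the launch.
# Model names come from the announcement; the categories and fallback
# behavior are assumed for the sake of the example.
ROUTES = {
    "reasoning": "claude-opus-4.6",       # core reasoning engine
    "research": "gemini",                 # deep research
    "image": "nano-banana",               # image generation
    "video": "veo-3.1",                   # video generation
    "long_context": "gpt-5.2",            # long-context recall
}


def route(task_kind: str) -> str:
    """Map a subtask category to a model, falling back to the core reasoner."""
    return ROUTES.get(task_kind, ROUTES["reasoning"])
```

A plan like "research a market, then produce a launch video" would, under this sketch, fan out to `route("research")` and `route("video")` as separate sub-agent assignments.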
OpenAI Forms "Frontier Alliances" with BCG, McKinsey, Accenture, and Capgemini
On February 23, OpenAI announced multi-year Frontier Alliances with Boston Consulting Group, McKinsey & Company, Accenture, and Capgemini to accelerate enterprise adoption of its Frontier AI agent platform. OpenAI framed the initiative around a specific diagnosis: "The limiting factor for seeing value from AI in enterprises isn't model intelligence, it's how agents are built and run in their organizations." BCG and McKinsey take a strategy and operating model role, helping leadership teams redesign workflows and embed AI at scale; Accenture and Capgemini handle end-to-end technical implementation, connecting Frontier to enterprise data architectures and existing systems. Each partner is building dedicated practice groups certified on OpenAI technology, while OpenAI's Forward Deployed Engineering team works directly alongside them in client engagements. Frontier is described as a "semantic layer for the enterprise" that lets AI agents navigate CRM, HR, and internal ticketing tools. The platform is available to a limited set of customers, with broader availability expected over the coming months.
Samsung Adds 'Hey Plex' Perplexity Hotword to Galaxy S26
On February 22, Samsung announced that the Galaxy S26 series will ship with Perplexity integrated at the OS level — the first time Samsung has granted system-level access to an app made by neither itself nor Google. Users can invoke Perplexity via a dedicated "Hey Plex" wake phrase or by pressing and holding the side button, placing it alongside "Hey Google" and "Hey Bixby" as a third parallel hotword on the device. The integration is deeply embedded across core Samsung apps including Notes, Calendar, Gallery, Clock, and Reminders, as well as select third-party apps, enabling multi-step workflows — such as researching a destination, saving a note, setting a reminder, and adding a calendar event — in a single continuous interaction. Samsung's revamped Bixby also routes complex, web-based, and generative queries through Perplexity's APIs in the background, while handling on-device actions itself. Samsung cited internal data showing that nearly 8 in 10 users already rely on more than two types of AI agents as the rationale for building an open, multi-agent ecosystem rather than locking users into a single assistant. The integration is expected to expand to other Galaxy devices, with details on supported hardware to follow.
Figma Integrates OpenAI Codex to Unify Design and Code Workflows
Figma announced integration with OpenAI's Codex this week, enabling teams to move fluidly between visual design and coding environments within a single tool. Through Figma's Model Context Protocol (MCP) server, designers can iterate on assets directly inside code workflows, and engineers can refine code-generated visuals without switching platforms. The integration arrives amid rapid Codex adoption, including strong uptake of its macOS app and growing weekly active usage. For product and marketing teams, the practical implication is shorter production cycles: landing pages, microsites, and product UI can iterate faster when the handoff between design intent and implementation is automated rather than manual.
Anthropic Launches Claude Code Remote Control: Control Your Terminal from Your Phone
On February 24, Anthropic announced Claude Code Remote Control, a new feature that lets developers start a coding session in their local terminal and continue it from a smartphone, tablet, or any web browser — while all execution remains on their own machine. The feature is a synchronization layer, not a cloud migration: the local Claude Code session makes only outbound HTTPS requests, opens no inbound ports, and keeps full access to the user's filesystem, MCP servers, tools, and project configuration. Starting a session with the command claude remote-control (or typing /rc in-session) generates a session URL and QR code; users can open the URL in any browser, scan it with the Claude mobile app, or find the session at claude.ai/code. Traffic is routed through the Anthropic API over TLS using multiple short-lived credentials scoped to a single purpose, and sessions reconnect automatically after brief network drops. The launch coincides with a remarkable growth milestone: Claude Code has reached a $2.5 billion annualized run rate as of February 2026 — more than doubling since the start of the year — with 29 million daily installs in Visual Studio Code alone. Remote Control is currently rolling out as a Research Preview for Claude Max subscribers ($100–$200/month), with Pro plan access expected shortly; it is not yet available on Team or Enterprise plans. Current research preview limitations include one remote session at a time, a requirement that the terminal remain open, and a roughly 10-minute network timeout.
NVIDIA Gives First Look at Vera Rubin: 10x More Efficient Than Blackwell, Shipping H2 2026
CNBC received an exclusive first look at NVIDIA's Vera Rubin AI system, the successor to Grace Blackwell, ahead of its expected H2 2026 ship date. The system pairs two Rubin GPUs with one Vera CPU across 17,000 components and is NVIDIA's first platform to be fully liquid cooled — a move the company says dramatically reduces data-centre water consumption compared to traditional evaporative cooling. Analysts estimate Vera Rubin rack pricing at $3.5–4 million, roughly 25% above Grace Blackwell. The system is designed to be approximately 10 times more computationally efficient than its predecessor, with the Rubin architecture targeting the inference demands of trillion-parameter models and agentic workloads. Meta has already announced plans to deploy Vera Rubin in its data centres by 2027.
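The reported numbers are internally consistent, as a quick check shows: a $3.5–4 million Vera Rubin rack at a roughly 25% premium implies a Grace Blackwell rack price of about $2.8–3.2 million.

```python
# Consistency check on the analyst figures cited above: divide the
# Vera Rubin rack price range by the ~1.25x premium to recover the
# implied Grace Blackwell rack price range.
vera_rubin_range = (3.5e6, 4.0e6)          # estimated rack price, USD
premium = 1.25                              # ~25% above Grace Blackwell
implied_grace_blackwell = tuple(p / premium for p in vera_rubin_range)
print(implied_grace_blackwell)              # (2800000.0, 3200000.0)
```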