Google expanded Gemini’s capabilities on Thursday, embedding its native Nano Banana image‑generation models into the Personal Intelligence framework. The upgrade lets the chatbot produce images that reflect a user’s own data – emails, photos, calendar events and files stored in Drive – rather than relying solely on textual prompts.
Subscribers to Google’s AI Plus, Pro and Ultra plans in the United States will see the feature appear over the next few days. Free users are expected to gain access in the following weeks, and Google has announced no pricing changes. The rollout deliberately skips European markets, a move the company attributes to potential regulatory friction under GDPR and the forthcoming AI Act.
Nano Banana, Google’s in‑house image‑generation family, now includes three versions. The original model, built on Gemini 2.5 Flash, handles basic conversational image creation. Nano Banana 2, introduced in February 2026 on Gemini 3.1 Flash, adds faster iteration and a Pro‑level feature set. The latest Nano Banana Pro, powered by Gemini 3 Pro, leverages the model’s full reasoning and real‑world knowledge to produce outputs that capture deeper prompt nuance.
What sets this integration apart is the use of personal context. Gemini’s Personal Intelligence, launched in January 2026, already pulls text, photos and videos from a range of Google services – Gmail, Calendar, Drive, Photos, YouTube, Search, Maps and more – when users opt in. Until now, that data informed only text‑based responses, such as summarising travel plans from a confirmation email or suggesting purchases based on past orders. With Nano Banana, the AI can now generate visual content that incorporates a user’s own images or reflects preferences gleaned from their digital footprint.
Google highlighted several use cases: a user could ask Gemini to create a family portrait that blends recent vacation photos with a stylised background, or request a marketing mock‑up that mirrors their brand’s colour palette drawn from previous design files. A “sources” button appears alongside each generated image, showing which pieces of personal data the model consulted. The transparency feature aims to address growing concerns about the provenance of AI‑generated media.
The move positions Google against competitors that have introduced image‑generation tools but lack comparable data breadth. OpenAI’s ChatGPT and Apple Intelligence draw on more limited user data, while Google’s ecosystem spans email, cloud storage, maps and video, giving it a distinctive personalisation moat. The company also hinted at future on‑device generation for Pixel phones and Android devices, which would keep processing private and reduce latency.
Privacy advocates remain wary. Google says the system does not train on personal data, but it does process that information at inference time to tailor images. Critics argue that the distinction may be lost on average users, who simply see an AI that appears to “know” details about their homes, families or recent trips. The exclusion of Europe from the launch suggests that Google anticipates tighter scrutiny under data‑protection laws.
For subscribers who enable the feature, the promise is clear: an AI assistant capable of producing visuals that feel personal rather than generic stock imagery. Those who prefer to keep their data siloed can simply decline the opt‑in, though they will miss out on the new creative possibilities. Google has not disclosed any additional costs beyond existing subscription tiers, indicating that the functionality is bundled into current plans.