New Delhi, September 16, 2025 – In a whirlwind of chiffon drapes and golden-hour glows, Instagram has been swept up by the “Nano Banana AI Saree” trend, where users transform everyday selfies into retro Bollywood portraits featuring elegant traditional sarees. Powered by Google’s Gemini app, the craze has millions of women – and curious others – donning virtual ethnic wear straight out of a 1990s film set. But amid the nostalgic glamour, a growing chorus of voices is raising alarms: Is this digital daydream safe, or a gateway to deeper privacy risks?
The trend exploded earlier this month, evolving from Gemini’s earlier “Nano Banana” fad, a quirky use of the tool that turns photos into glossy 3D figurines with oversized eyes and cartoonish proportions. “Nano Banana” is Google’s own nickname for the Gemini 2.5 Flash Image model behind these edits, and the moniker stuck as users began experimenting with prompts to create hyper-stylized images. By mid-September, Google reported over 500 million images generated via the tool, with the saree edits dominating feeds in India and beyond.
At its core, the process is deceptively simple. Users log into the Gemini app, upload a clear selfie, and input a tailored prompt – often sourced from viral lists on social media. Popular ones include: “Transform the subject into a classic Bollywood heroine in a flowing red chiffon saree, hair styled in soft waves, background warm-toned with glowing sunset light for a romantic, dramatic mood.” Seconds later, the AI delivers a grainy, poster-style portrait: flowing fabrics, intricate patterns like polka dots or florals, and cinematic lighting that evokes icons like Madhuri Dixit or Sridevi.
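For the technically curious, the same kind of edit can be reproduced outside the app. Below is a minimal sketch, assuming the google-genai Python SDK and the “gemini-2.5-flash-image-preview” model name associated with Nano Banana; the exact model string, key handling, and response shape should be checked against Google’s current Gemini API documentation, and the filenames are placeholders.

```python
# Minimal sketch of a saree-style edit via the Gemini API.
# Assumptions: the google-genai SDK (pip install google-genai pillow) and the
# "gemini-2.5-flash-image-preview" model name; verify both against current docs.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

prompt = (
    "Transform the subject into a classic Bollywood heroine in a flowing red "
    "chiffon saree, hair styled in soft waves, background warm-toned with "
    "glowing sunset light for a romantic, dramatic mood."
)
selfie = Image.open("selfie.jpg")  # placeholder input photo

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[prompt, selfie],
)

# The response interleaves text and image parts; save any returned image.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("saree_portrait.png")
    elif part.text is not None:
        print(part.text)
```

In the app itself, all of this happens behind a single upload button; the relevant point for the privacy debate is that the selfie and the prompt both leave the device and are processed on Google’s servers.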
“It’s like stepping into a Raj Kapoor film,” gushed one Instagram user in a reel that racked up over 1 million views. The aesthetic – chiffon sarees billowing in the wind, vintage textures, and warm amber tones – taps into a deep cultural nostalgia, blending modern tech with timeless Indian fashion. Celebrities and influencers have amplified the buzz, with Shantanu Naidu, former aide to the late Ratan Tata, quipping in a viral video that “people already own sarees – why AI?” That clip alone garnered more than 1 million views.
Yet, the fun has a darker side. Reports of “creepy” glitches are proliferating on platforms like X and Instagram. In one widely shared video with nearly 7 million views, user Jhalak Bhawnani recounted uploading a photo of herself in a green suit, only for the AI to return an image showing a mole on her hand – a mole she actually has, even though it was not visible in the photo she uploaded. “It’s very scary,” she captioned the video, sparking hundreds of comments from others describing similar anomalies: extra fingers, floating hands, or scars that weren’t there. Some speculated the AI was drawing on linked Google data, such as Photos or search history, to “enhance” details with unnerving accuracy.
These incidents have fueled broader safety concerns. IPS officer VC Sajjanar, a prominent voice on cybercrime, issued a stark warning on X: “Beware of this trend – it may be a trap.” He highlighted risks of data harvesting, where uploaded images could be reverse-engineered or misused for deepfakes. Experts echo this, noting that AI tools like Nano Banana process facial data on Google’s servers, potentially retaining metadata that reveals device info, location, or even browsing habits. “Your original photo can be reconstructed from the output,” warned one cybersecurity analyst in a post viewed thousands of times.
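The metadata point is easy to demonstrate on the upload side. The short sketch below, using the Pillow imaging library and a hypothetical “selfie.jpg”, simply prints the EXIF fields a typical phone photo already carries: device make and model, software version, timestamps and, if location tagging is enabled, GPS coordinates. Whether and for how long any of this is retained server-side is Google’s call; the sketch only shows what an unedited upload can expose.

```python
# Minimal sketch: inspect the EXIF metadata a typical phone photo carries.
# Requires Pillow (pip install pillow); "selfie.jpg" is a hypothetical filename.
from PIL import Image
from PIL.ExifTags import GPSTAGS, TAGS

img = Image.open("selfie.jpg")
exif = img.getexif()

# Top-level tags: camera/phone make and model, software, timestamps, etc.
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# GPS coordinates, if the phone embedded them, live in a nested GPS IFD.
for tag_id, value in exif.get_ifd(0x8825).items():
    print(f"GPS {GPSTAGS.get(tag_id, tag_id)}: {value}")
```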
The potential for harm extends further. In a country where gender-based cyber harassment is rampant – with over 50,000 cases reported annually, per National Crime Records Bureau data – altered images could fuel non-consensual edits or scams. One X user lamented, “AI can add a saree today; tomorrow, it could remove clothes entirely.” Others pointed to trademark risks, noting that the tool can accurately reproduce brand logos from collectible toys such as Good Smile Company’s Nendoroid figurines, raising intellectual property red flags.
Google, for its part, says safeguards are in place. Images carry invisible SynthID watermarks and metadata tags identifying them as AI-generated, per the company’s AI Studio guidelines. The firm emphasizes that user data isn’t used to train models without consent, and that the app complies with global privacy laws like GDPR and India’s DPDP Act. “Nano Banana is designed for creativity, with built-in protections,” a spokesperson told reporters. Still, critics argue these measures fall short against sophisticated bad actors.
| Pros of the Nano Banana Saree Trend | Cons and Risks |
| --- | --- |
| Quick, accessible creativity – no editing skills needed. | Privacy leaks via metadata and facial recognition data. |
| Celebrates cultural heritage through nostalgic visuals. | Potential for deepfake misuse or non-consensual alterations. |
| Over 500M images created, boosting Gemini’s 10M+ downloads. | “Creepy” glitches like added personal details (e.g., moles, scars). |
| Free tool with viral prompts shared widely. | Data retention on servers; reconstruction of originals possible. |
As the trend shows no signs of slowing – with fresh posts flooding Instagram daily – users are urged to tread cautiously: review privacy settings, avoid sharing sensitive images, and opt for anonymized uploads; one concrete precaution, sketched below, is stripping a photo’s metadata before it ever leaves your device. For those tempted, start with non-personal photos. In the end, while AI promises to drape us in digital elegance, it also reminds us that true style can come with strings attached. What was meant to revive the past might just haunt the future.
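For readers who want the “anonymized uploads” advice in concrete terms, here is a minimal metadata-stripping sketch using Pillow. It copies only the pixel data into a fresh file so that EXIF fields such as device model, GPS location, and timestamps are left behind; the filenames are hypothetical, and the sketch does nothing about the face in the photo itself.

```python
# Minimal sketch: strip EXIF metadata from a photo before uploading it.
# Requires Pillow (pip install pillow); filenames are hypothetical.
from PIL import Image

original = Image.open("selfie.jpg")

# Copy only the pixel data into a new image, so EXIF fields
# (device model, GPS coordinates, timestamps) are not carried over.
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("selfie_clean.jpg", quality=95)
```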