The tech giant says attackers used over 100,000 crafted prompts in a bid to reverse-engineer Gemini’s reasoning patterns, highlighting growing intellectual property risks as AI systems become central to global business and innovation strategies.
Google has revealed that its Gemini artificial intelligence chatbot was targeted in a large-scale attempt to replicate its underlying behaviour through what the company describes as a “model extraction” attack.
According to Google, the effort involved more than 100,000 carefully structured prompts designed to analyze how Gemini generates responses. Rather than attempting to steal source code, attackers sought to study output patterns and infer the internal reasoning processes that power the AI system. The activity was detected by Google’s internal security teams and linked to commercially motivated actors seeking to develop competing AI models.
Mapping AI behaviour through prompts
Google said the attackers repeatedly interacted with Gemini in ways intended to uncover how it interprets context, connects ideas, and solves problems. By collecting and analyzing large volumes of responses, the actors aimed to reconstruct elements of the model’s decision-making structure.
This approach exploits the open, interactive nature of large language models, which must generate answers to whatever they are asked in real time. Over time, patterns in those outputs can offer insight into how the system functions.
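Google has not described the attackers' tooling, but the general shape of a model-extraction or distillation campaign is well understood. The sketch below is a minimal, hypothetical illustration of that loop; query_model, the prompt list, and the output file are placeholders, not details from the Gemini incident.

```python
# Hypothetical sketch of a model-extraction (distillation-style) loop.
# `query_model` stands in for any hosted chat-model API; the actual prompt
# sets and training pipeline used against Gemini are not public.
import json

def query_model(prompt: str) -> str:
    """Placeholder for a call to an external LLM endpoint."""
    raise NotImplementedError("stand-in for an API call")

def harvest_pairs(prompts: list[str], out_path: str) -> None:
    """Record prompt/response pairs that could later train a surrogate model."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_model(prompt)
            f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")

# At the scale Google describes, an attacker would iterate over tens of
# thousands of structured prompts, then fine-tune a separate "student"
# model on the harvested pairs to imitate the original's behaviour.
```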
The company noted that the campaign appeared to originate from multiple global regions, though it declined to publicly identify those involved.
Security measures and industry implications
The suspicious activity was flagged by the Google Threat Intelligence Group, which monitors digital threats. Using behavioural analytics and anomaly-detection tools, the team identified unusual prompt patterns consistent with extraction attempts. Google said it blocked the accounts involved and introduced additional safeguards to limit what repeated queries can reveal about the model.
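Google has not published how its detection pipeline works. The following is a simplified sketch of the kind of volume-and-similarity heuristic that behavioural analytics might apply to account activity; the thresholds and the per-account prompt log are illustrative assumptions.

```python
# Illustrative heuristic for spotting extraction-like query behaviour.
# Thresholds and data layout are assumptions, not Google's detection logic.
from difflib import SequenceMatcher

VOLUME_THRESHOLD = 5_000      # daily queries considered unusually high
SIMILARITY_THRESHOLD = 0.8    # highly templated prompts score near 1.0

def avg_pairwise_similarity(prompts: list[str], sample: int = 50) -> float:
    """Average pairwise similarity over a small sample of an account's prompts."""
    subset = prompts[:sample]
    scores = [
        SequenceMatcher(None, a, b).ratio()
        for i, a in enumerate(subset)
        for b in subset[i + 1:]
    ]
    return sum(scores) / len(scores) if scores else 0.0

def looks_like_extraction(daily_prompts: list[str]) -> bool:
    """Flag accounts that combine unusual volume with near-identical prompts."""
    return (
        len(daily_prompts) > VOLUME_THRESHOLD
        and avg_pairwise_similarity(daily_prompts) > SIMILARITY_THRESHOLD
    )
```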
Despite these defenses, experts warn that AI systems remain inherently exposed due to their interactive design. As organizations increasingly rely on AI models trained on proprietary data, the risk of intellectual property leakage grows.
The challenge extends beyond Google. OpenAI has previously raised concerns about similar distillation tactics elsewhere in the AI sector, in which one model's outputs are used to train a rival system.
Industry analysts say safeguarding model behaviour, not just data, will be a critical focus as AI adoption accelerates worldwide.