An incident involving the acting head of the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has intensified concerns in Washington over the use of commercial AI tools for handling sensitive government information. Last summer, Madhu Gottumukkala, who was appointed acting CISA director by President Trump, reportedly uploaded government documents marked “For Official Use Only” into the public version of ChatGPT, triggering automated security alerts and an internal review by the Department of Homeland Security (DHS).
While the documents were not classified and Gottumukkala was reportedly authorized to access and use AI tools, the episode exposed a deeper institutional dilemma. Government agencies are increasingly experimenting with generative AI to boost productivity, yet clear boundaries around data sensitivity, model training, and external data exposure remain underdeveloped.
Cybersecurity experts warn that even non-classified material can carry operational, procedural, or contextual risks if shared with commercial AI platforms that lack sovereign controls. Public AI systems may retain metadata, logs, or contextual traces that could be exploited, raising questions about compliance, auditability, and long-term data governance.
The incident has reignited calls for stricter AI usage policies across federal agencies, including clearer definitions of permissible data, dedicated government-grade AI systems, and stronger safeguards. As AI adoption accelerates, the challenge for policymakers is balancing innovation with the core mandate of national security and public trust.