US Cyber Chief’s ChatGPT Data Breach Sparks Security Debate
Madhu Gottumukkala, the Indian-origin chief of a key US cybersecurity agency, reportedly shared sensitive government files with the AI chatbot ChatGPT, raising major security concerns.
Key Details of the Incident
- Who: Madhu Gottumukkala, chief of the US Cyber Safety Review Board (CSRB).
- What: Shared sensitive files containing US government cybersecurity information with OpenAI’s ChatGPT.
- Why: Sought the AI’s help in preparing a presentation.
- Source: The incident was first reported by the Washington Post.
Security Fallout and Response
The shared files reportedly contained details about the US government’s cybersecurity posture. The disclosure has reignited debate over whether generative AI tools are safe to use for confidential government work.
Gottumukkala has publicly apologized, stating he had no intention to disclose sensitive data and was unaware of the files’ classified nature at the time.
Expert Opinions Divided
The breach has divided cybersecurity experts:
- Risk Perspective: Some experts warn that AI chatbots cannot guarantee data protection; information shared with them may be retained or exposed to unauthorized parties, making them unsuitable for sensitive material.
- Safe-Use Perspective: Others argue that, with proper security protocols, AI tools can be used cautiously. They advise users to share only information they would be comfortable seeing made public.
Broader Implications
The incident is a stark warning for individuals and organizations alike. It underscores the need for caution when sharing sensitive data with online tools and highlights the urgent need for clear, enforceable policies governing AI use in official and sensitive roles.