In January 2026, reporting revealed that sensitive U.S. government documents were entered into the public version of ChatGPT by a senior official at the Cybersecurity and Infrastructure Security Agency (CISA). The files were marked “For Official Use Only” — not formally classified, but clearly not intended for public or external systems.

The incident was first reported by POLITICO, one of the most established political and policy news organisations covering U.S. government and security matters.

What makes this case relevant is not the individual involved. It is the timing of the discovery. The issue became visible only after the data had already been shared.

The CISA–ChatGPT Case: What Was Reported

According to public reporting, the documents contained internal government information labelled “For Official Use Only”. This label is widely used across U.S. government agencies to flag information that should not be disclosed outside approved environments.

Importantly, these documents were not classified. However, the label still signals sensitivity and restricted use. Sharing such material with public AI systems falls outside standard information-handling practices.

For background on the agency itself, see the official website of the Cybersecurity and Infrastructure Security Agency (CISA).

When the Issue Became Visible

The key detail is simple. Security controls did not stop the data at the moment it was entered into ChatGPT.

Instead, internal monitoring and compliance processes identified the issue later. By that point, the information had already left its original environment.

This sequence matters. It shows how many organisations still rely on reactive detection when employees use AI tools in daily work.

This challenge is not limited to government environments. It affects banks, fintechs, regulated industries, and any organisation using public AI systems without a preventive layer.

How Organisations Usually Detect AI Data Leaks

Most organisations do not detect AI-related data leaks at the moment they happen. Security teams usually discover them later.

The process often follows a clear pattern.

  1. Users paste large amounts of text
    The content often comes from internal documents, PDFs, reports, or official templates.
  2. The text contains internal markers
    These may include labels such as “For Official Use Only”, reference numbers, or formal document structures.
  3. The user submits the content to a public AI service
    In many organisations, public AI tools are not approved for internal material.
  4. Security controls detect the issue later
    Endpoint tools or data loss prevention (DLP) systems raise alerts after the data has already left its original environment.
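The marker-based pattern in steps 2–4 can be sketched as a simple scan for sensitivity labels and reference numbers before text leaves the organisation. This is a minimal illustration, not a complete policy engine; the marker list and the reference-number pattern below are hypothetical examples.

```python
import re

# Example sensitivity markers; a real deployment would load these
# from the organisation's information-handling policy.
MARKERS = [
    r"For Official Use Only",
    r"FOUO",
    r"Internal Use Only",
]

# Hypothetical pattern for formal reference numbers (e.g. "REF-2026-0142").
REFERENCE_PATTERN = r"\b[A-Z]{2,5}-\d{4}-\d{3,6}\b"


def find_sensitivity_markers(text: str) -> list[str]:
    """Return all sensitivity labels and reference numbers found in the text."""
    hits = []
    for marker in MARKERS:
        if re.search(marker, text, flags=re.IGNORECASE):
            hits.append(marker)
    hits.extend(re.findall(REFERENCE_PATTERN, text))
    return hits


prompt = "Summarise this memo (For Official Use Only, REF-2026-0142)."
print(find_sensitivity_markers(prompt))
# → ['For Official Use Only', 'REF-2026-0142']
```

A scan like this is cheap enough to run on every prompt; the difference between prevention and detection is only *when* it runs, not *what* it looks for.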

Why Traditional DLP Often Reacts Too Late

Data Loss Prevention plays an important role in enterprise security. It supports investigations and compliance requirements.

However, DLP usually reacts after the action. It does not stop users before they submit sensitive content to AI tools.

This delay increases risk, especially when teams use AI during daily operations.

The Hidden Risk: Incidents No One Reports

Some AI data leaks never trigger an alert. These incidents remain invisible for a long time.

  • AI tools rarely warn users about risky prompts
  • Service providers do not notify organisations
  • Teams often discover issues during audits or internal reviews

This gap explains why many organisations underestimate their real AI exposure.

Why Training Alone Does Not Prevent AI Data Leaks

Most organisations invest in awareness and training. They also define internal policies.

Still, mistakes happen. Users act quickly and under pressure.

At that moment, training alone does not help. Users need feedback before they click “Send”.

A Missing Control: Prechecks Before Sending

A practical safeguard checks content before it reaches an AI system. It runs directly where users interact with AI.

This approach does not replace DLP. It reduces risk earlier in the workflow.
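One way to picture such a precheck is a gate that runs locally, before any prompt is handed to an AI client. The sketch below assumes nothing about a specific product: `send_to_ai`, the exception type, and the blocked-pattern list are all illustrative stand-ins, not a real API.

```python
import re

# Illustrative marker list; a real precheck would use the organisation's policy.
BLOCKED_PATTERNS = [
    r"For Official Use Only",
    r"\bConfidential\b",
]


class BlockedPromptError(Exception):
    """Raised when a prompt fails the precheck and must not be sent."""


def precheck(prompt: str) -> None:
    """Block the prompt if it matches any restricted pattern."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            raise BlockedPromptError(f"Prompt matches restricted pattern: {pattern}")


def send_to_ai(prompt: str) -> str:
    """Hypothetical AI client call; the precheck runs before anything leaves the device."""
    precheck(prompt)  # preventive step: stop the data before it is transmitted
    return f"(sent to AI) {prompt}"  # placeholder for the real network call


print(send_to_ai("Draft a polite reply to this email."))
try:
    send_to_ai("Summarise this For Official Use Only report.")
except BlockedPromptError as err:
    print("Blocked:", err)
```

The design choice that matters here is ordering: the check sits in front of the network call, so a risky prompt never reaches the AI service. Traditional DLP, by contrast, typically inspects traffic or logs after transmission.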

You can learn how a local, user-side precheck works in practice on our How It Works page.

What This Means for Regulated Industries

Banks, fintechs, public institutions, and other regulated sectors face the same challenge: employees use public AI tools in daily work, so accidental exposure of restricted data is a recurring risk, not a hypothetical one.

Controls that only react after an incident do not prevent exposure. Preventive checks close that gap.

A Clear Lesson for Responsible AI Use

Most AI data leaks are accidental and discovered too late.

Responsible AI adoption requires controls that act before sensitive data leaves the organisation. Detection alone is no longer enough.