New optional security feature restricts web access and connected app actions for high-risk users, while ‘Elevated Risk’ labels warn when certain AI capabilities may expose sensitive data to greater security vulnerabilities.
OpenAI has introduced a new optional security setting called “Lockdown Mode” in ChatGPT, aimed at protecting users from prompt injection attacks and potential data leaks as AI systems become more connected to external networks and applications.
Prompt injection attacks occur when malicious instructions are embedded into content accessed by an AI system, potentially causing it to reveal sensitive information or take unintended actions. OpenAI said the risk has increased as AI tools gain broader web access and integration with third-party services.
Restricted access for high-risk users
Lockdown Mode is designed primarily for users who may face elevated security threats, such as corporate executives and security teams. It is available across enterprise-focused offerings, including ChatGPT Enterprise and sector-specific versions. Workspace administrators can activate the feature for selected users through role-based controls.
When enabled, Lockdown Mode limits ChatGPT’s interaction with external systems. Web browsing is confined to cached content, preventing live internet requests. Several features are disabled, including image outputs, Deep Research, Agent Mode, and automated file downloads for data analysis. Users can still work with files they upload manually.
The feature does not block malicious prompts from appearing in content, but it prevents outbound network requests that could transmit sensitive data. Memory settings, conversation sharing, and file uploads remain unaffected, subject to administrative controls.
App controls and risk labelling
OpenAI has also outlined security guidance for connected applications. While Lockdown Mode does not automatically disable third-party apps, administrators retain control over which tools users can access. The company categorises app actions based on risk, warning that write actions carry higher exposure due to visible outcomes.
In addition, OpenAI is introducing “Elevated Risk” labels within ChatGPT and related tools to flag features that may involve additional security considerations — such as granting network access for developer workflows.
The company said Lockdown Mode will expand to consumer users in the coming months, and risk labels will be updated or removed as security safeguards improve.
See What’s Next in Tech With the Fast Forward Newsletter
Tweets From @varindiamag
Nothing to see here - yet
When they Tweet, their Tweets will show up here.



