Introduction

The emergence of artificial intelligence (AI)-based SaaS applications has brought about a transformative shift in how corporate users engage with their daily work. Generative AI applications, such as ChatGPT, have opened countless possibilities for organisations and their employees to enhance business productivity, simplify tasks, improve services, and streamline operations. Teams and individuals can conveniently use ChatGPT to generate content, translate text, process data, build financial plans, and debug and write code, among other uses. However, generative AI applications also create enormous and unprecedented data security risks.

The Data Security Challenge

While AI applications have the potential to improve work efficiency, they also introduce new risks and expose sensitive data to external threats. Organisations need to address these challenges to ensure the confidentiality, integrity, and security of their data. Here are some examples of how sensitive data can be exposed to ChatGPT and other cloud-based AI applications:

• Text containing Personally Identifiable Information (PII) can be pasted into the chatbot, and thereby exposed, when requesting email ideas, customer responses, personalised letters, or sentiment analysis.

• Confidential health information, including individualised treatment plans and medical imaging data, may be entered into the chatbot, potentially compromising patient privacy.

• Software developers may upload unreleased proprietary source code for debugging, code completion, and performance improvements.

• Software developers could even connect a corporate app containing source code or a database directly to generative AI apps via API. This cross-app data movement enables automatic synchronisation of information in the cloud and facilitates routine tasks such as refining code structure and improving readability, but it can also expose confidential data to an unsafe third-party application (see the first sketch after this list).

• Files of confidential company documents, such as earnings report drafts, M&A documents, and pre-release announcements, might be uploaded for grammar and writing checks, inadvertently risking data leaks.

• Financial data, including corporate transactions, undisclosed revenue, credit card numbers, and customer credit ratings, can be fed to ChatGPT for financial planning, compliance, fraud detection, and customer onboarding, often without any security controls in place.

• Within the marketing department, an employee could in the future integrate the complete customer database in Salesforce.com with ChatGPT, other generative-AI-powered plug-ins, and many other unsanctioned apps via an OAuth integration. This cross-app integration would empower the employee to leverage the capabilities of GPT, for example to automate composing emails to contacts whose contracts are nearing expiration. This is another example of cross-app data movement that cannot be detected by inline network solutions such as firewalls and secure web gateways (SWGs); see the second sketch after this list.
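To make the developer scenario above concrete, here is a minimal sketch of the kind of script a developer might run. The file path, prompt, and API key are hypothetical stand-ins; the endpoint is the public OpenAI chat completions API. The point is that a single ordinary HTTPS request is enough to move an entire proprietary source file outside the corporate boundary.

# A minimal sketch, with a hypothetical file path and API key, of a developer
# script that ships unreleased source code to a cloud AI API for debugging.
import requests

API_KEY = "sk-..."  # hypothetical personal key, outside corporate control

# Read an unreleased, proprietary source file from the local repository.
with open("src/pricing_engine.py") as f:  # hypothetical file
    source_code = f.read()

# The entire file leaves the corporate boundary in this single HTTPS request.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user",
             "content": f"Debug and improve this code:\n{source_code}"},
        ],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])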
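The marketing scenario is similar, except the data moves cloud-to-cloud. The second sketch below approximates what an OAuth-connected plug-in would do behind the scenes; the simple_salesforce library, the credentials, and the Contract_End_Date__c custom field are all hypothetical stand-ins for whatever the real integration uses. When this logic runs as a cloud-hosted plug-in rather than on a managed endpoint, the traffic never traverses the corporate network, which is why inline firewalls and SWGs cannot see it.

# A minimal sketch, assuming the simple_salesforce library and hypothetical
# credentials, of the cross-app data movement an OAuth-connected plug-in
# performs behind the scenes.
import requests
from simple_salesforce import Salesforce

# Connect to the corporate Salesforce org (hypothetical credentials).
sf = Salesforce(
    username="employee@example.com",
    password="...",
    security_token="...",
)

# Pull contacts whose contracts end soon (Contract_End_Date__c is a
# hypothetical custom field).
records = sf.query(
    "SELECT Name, Email, Account.Name FROM Contact "
    "WHERE Contract_End_Date__c = NEXT_N_DAYS:30"
)["records"]

# Send each customer record to the AI app to draft a renewal email. The
# customer data moves cloud-to-cloud, invisible to on-premises controls.
for record in records:
    prompt = (
        f"Write a contract-renewal email to {record['Name']} "
        f"({record['Email']}) at {record['Account']['Name']}."
    )
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": "Bearer sk-..."},  # hypothetical key
        json={"model": "gpt-3.5-turbo",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    print(response.json()["choices"][0]["message"]["content"])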

You have to see it to believe it

Join EveryCloud and Netskope for a full demonstration. The session will cover:

  • How to prevent sensitive data from being shared with ChatGPT

  • How to gain visibility into which AI applications users attempt to access, when, and from where

  • How real-time coaching workflows can make users aware of their potentially risky activities
