This article explains how to maintain secure access to AI apps in your organization.
The recent explosion in the use of AI-based tools for a multitude of purposes presents new and unique security challenges and risks to your organization. For example, a user can enter proprietary information into the free tier of an AI app, and the app vendor may then have the right to use that information. To protect your organization and its sensitive information, it's important to implement recommended best practices for the safe use of AI tools. Cato offers various features and capabilities that can help control access and protect against vulnerabilities posed by AI apps, while also meeting the needs of your users. This article contains examples of how to securely use AI tools in your organization.
The Internet Firewall and the CASB Application Control policies provide ways to secure AI app traffic by leveraging the Cato Cloud's robust app identification capabilities. You can define Internet Firewall rules to control access to a predefined category of AI apps, as well as set rules for specific AI apps. In addition, you can define the Application Control policy to ensure that users access only your enterprise tenant for an AI app, keeping your proprietary information safe.
The ability to identify apps in traffic flows has become crucial for modern networking and cybersecurity. Security vendors often resort to manually classifying applications with teams of engineers; however, this method doesn't scale well and suffers from limited accuracy. Cato developed a groundbreaking approach driven by data science that can automatically identify applications, including new AI apps.
The identification process uses machine learning algorithms trained on a repository of billions of traffic flows that cross the Cato private backbone. The algorithms learn to identify applications based on thousands of data points derived from characteristics of the traffic flows related to the app, resulting in highly accurate identification. When Cato confirms that it can accurately identify an app, the app is added as an entity in the Cato Management Application and can be used in network and security rules, including for the Internet Firewall and Application Control policies.
You can easily define Internet Firewall rules to manage access to AI apps. Cato maintains a system category for Generative AI Tools that lets you control access to the most popular AI apps in a single rule, including ChatGPT, AgentGPT, Google Bard, Elicit AI, MagicPen AI, Poe AI, OpenAI, and more. For example, you can define a rule that blocks all traffic to the Generative AI Tools category. An advantage of using the category is that when Cato adds new apps to it, the rule automatically applies to the new apps. You can use the App Catalog to find more information about the apps included in this category.
For more granular access control, you can also define rules for specific AI applications. For example, after creating a rule that blocks traffic to the Generative AI Tools category, you can create a rule with higher priority that allows traffic to ChatGPT for a specific group of users who need access.
The following example Internet Firewall rules allow the User Group Research Team to access ChatGPT, while blocking all other access to the Generative AI Tools category:
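To make the ordering behavior concrete, the sketch below models how an ordered, first-match rulebase like the one above is evaluated. This is purely illustrative Python, not the Cato API; the group names, app names, and category mapping are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    groups: set   # user groups the rule applies to ("any" matches everyone)
    apps: set     # specific apps or app categories the rule matches
    action: str   # "allow" or "block"

# Hypothetical rulebase: the allow rule sits above the category block rule.
RULEBASE = [
    Rule("Allow ChatGPT for Research", {"Research Team"}, {"ChatGPT"}, "allow"),
    Rule("Block GenAI Tools", {"any"}, {"Generative AI Tools"}, "block"),
]

# Hypothetical mapping of individual apps to their category.
APP_CATEGORY = {"ChatGPT": "Generative AI Tools", "Poe AI": "Generative AI Tools"}

def evaluate(user_group: str, app: str) -> str:
    """Return the action of the first rule that matches, top to bottom."""
    for rule in RULEBASE:
        group_match = "any" in rule.groups or user_group in rule.groups
        app_match = app in rule.apps or APP_CATEGORY.get(app) in rule.apps
        if group_match and app_match:
            return rule.action
    return "allow"  # assumed implicit default when no rule matches

print(evaluate("Research Team", "ChatGPT"))  # allow (specific rule matches first)
print(evaluate("Marketing", "ChatGPT"))      # block (falls through to the category rule)
```

The key point the sketch captures is priority: because the allow rule is evaluated before the category block, Research Team traffic to ChatGPT never reaches the block rule, while the same app is blocked for everyone else.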
To prevent the exposure of proprietary information in the free tier of an app, you can create rules in the Application Control policy that block users from accessing private accounts and allow access only to your enterprise tenant. For example, you can define rules for the OpenAI app that allow login activity only to your organization's tenant and block all other logins (for example, logging in with a private email address).
Below is a sample rulebase where the first rule allows logins to OpenAI for usernames that include the company domain, and the following rules block all other logins to OpenAI through direct and third-party authentication.
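The decision logic of that rulebase can be sketched as follows. This is an illustrative model only, not how Cato implements login matching; the domain `example.com` and the function name are hypothetical placeholders.

```python
# Sketch of domain-based login control: allow logins whose username belongs
# to the corporate domain, and block direct or third-party logins otherwise.
COMPANY_DOMAIN = "example.com"

def login_action(username: str) -> str:
    """First-match logic mirroring the sample rulebase."""
    if username.lower().endswith("@" + COMPANY_DOMAIN):
        return "allow"   # rule 1: login with the company domain is permitted
    return "block"       # later rules: all other logins are blocked

print(login_action("dana@example.com"))  # allow
print(login_action("dana@gmail.com"))    # block
```

Blocking third-party authentication (for example, signing in with a personal Google account) matters because such logins bypass a simple username check; in the actual policy this is handled by dedicated block rules rather than string matching.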
For more about configuring Application Control rules, see Managing the Application Control Policy.