Log Exporter: Under the Hood

Exporting Events Using Cato's Log Exporter


Note: The Log Exporter feature will reach End of Life in March 2024. For more information, see the relevant Release Note.

Cato Networks’ Log Exporter lets you export events to remote storage and integrate them with your SIEM or BI systems. In the Cato Management Application, you can select the event types that you want to export (Audit Trail, Health (Connectivity), Security, and System) and download a client script that retrieves the logs from the remote storage.

For more about configuring the Log Exporter, see Exporting Log Files.

High Level Overview of Exporting Logs

  1. In the Cato Management Application, select the event types to export.
  2. When the Log Exporter is enabled, it sends the event logs to an S3 bucket in Amazon AWS.
  3. You can download the stored events from this storage using Cato’s client script.
  4. Integrate these events into your SIEM system.

Explaining the Log Exporter

Use the Cato Management Application to enable and configure the Log Exporter. Select the log format (CEF or JSON) and the event types to export. Every 30 seconds, the Log Exporter exports the new events and sends them to the S3 bucket. (There is no limit on the S3 bucket file size.) A client script is available for download in the Cato Management Application; use it to download the exported files from the AWS S3 bucket.

Downloading Logs with a Client Script

The script connects to the Amazon S3 bucket for your account, downloads the log files, and unzips them. Before you run the script, install the curl and unzip tools on the host that runs it. The script contains an access key token that is required to download the logs from the S3 bucket; this key authenticates you to the S3 bucket and limits access to the logs for your account.
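Before running the client script, you can confirm the prerequisites are in place with a quick check like the following. This is a general-purpose sketch, not part of Cato's script; it simply verifies that the curl and unzip binaries are on the PATH:

```shell
#!/bin/sh
# Check that the tools the client script depends on are installed.
for tool in curl unzip; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing - install it before running the client script"
  fi
done
```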

Changing the Access Key

If you rotate access keys on a regular schedule, Cato lets you change the access key in the Cato Management Application. When you generate a new key, you must update the script with the latest key: either manually edit the script file with the new key, or download a new script (a script downloaded after you change the access token already contains the updated key). If you don’t update the script with the latest key, the script can’t access the AWS account and fails to download the logs.
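If you choose to edit the script manually, a substitution along these lines can swap the old key for the new one. The key values below are hypothetical placeholders; check your copy of the script for the actual token string it embeds:

```shell
#!/bin/sh
# Replace the old access key token embedded in the client script.
# OLD_KEY and NEW_KEY are hypothetical example values - substitute the
# real tokens from the Cato Management Application.
OLD_KEY="oldAccessKeyToken"
NEW_KEY="newAccessKeyToken"
SCRIPT="catonetworksSyslogClient.sh"

# Keep a backup, then swap the key in place.
# (On macOS/BSD sed, use: sed -i '' "s/.../.../" "$SCRIPT")
cp "$SCRIPT" "$SCRIPT.bak"
sed -i "s/${OLD_KEY}/${NEW_KEY}/" "$SCRIPT"
```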

Running the Client Script

After you download the script from the Cato Management Application, we recommend that you run the script with an output file name, so the event logs are saved to a file. If you run the script without adding an output filename, the script outputs the data to the screen.

The following example shows an executable command that outputs to a file:

catonetworksSyslogClient.sh > CatoSyslogExportFile.log


  • You must assign executable permissions for the script before running it.
  • Modifying the script isn't supported.
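Putting the notes above together, a typical first run looks like this (the output filename is just an example):

```shell
# Make the downloaded script executable, then run it with an output file
# so the event logs are saved rather than printed to the screen.
chmod +x catonetworksSyslogClient.sh
./catonetworksSyslogClient.sh > CatoSyslogExportFile.log
```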

The script supports up to 5 retries for downloading the log files. If the script fails to download the logs in the first 5 attempts, it stops running.

Downloading the Log Files

The client script connects to the S3 bucket and determines the first available log file to download. After each run, the script updates a storage file with the entry value that identifies the last log file downloaded.

The following image shows an example of the running directory with a storage file:


The storage file contains two numbers:

  1. The entry value – the number of the last log file downloaded
  2. The number of download attempts

The following image shows an example of a storage file:


In this example, the last time the script ran, it downloaded the log file with the entry value 7007. The 1 indicates that the script succeeded in downloading the logs on the first attempt.
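You can inspect these values yourself with a couple of shell lines. The file name `storage` and the single-line "entry attempts" layout below are assumptions based on the example above; check the file in the script's running directory for its actual name and layout:

```shell
#!/bin/sh
# Inspect the storage file. The file name "storage" and the one-line
# "entry attempts" layout are assumptions - verify them against the file
# in your running directory. A sample file is created here so the sketch
# is self-contained.
STORAGE_FILE="storage"
printf '7007 1\n' > "$STORAGE_FILE"   # sample contents matching the example

read -r entry attempts < "$STORAGE_FILE"
echo "Last log file downloaded: $entry"
echo "Download attempts: $attempts"
```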

Note: If you want to start over and download all the logs for your account in the S3 bucket, delete the storage file and then run the script.

Log Retention Policy

The Log Exporter keeps the log files in the Amazon S3 bucket for the last 7 days. Files and data older than 7 days are deleted.
