Cato Networks Knowledge Base

Running Cato API Requests and Passing Data to an Elasticsearch Server


We strongly recommend that you review the Support Policy for the Cato API before you start using it.

Overview

This article shows examples of Python scripts that run Cato API calls to collect account data, parse it into a more readable format, and pass it to an Elasticsearch server. You can then easily create dashboards and analyze logs with a visualization tool such as Kibana.

The following diagram shows the data flow that is described in the article:

[Diagram: Cato API server → Python script → Elasticsearch server → Kibana]

The Python script runs API requests to collect the data from the Cato server and inserts the data into the Elasticsearch server. Users can then create dashboards in Kibana and analyze the data gathered in Elasticsearch.
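To make this flow concrete, here is a minimal sketch of how a script can build the request body for a Cato GraphQL call. The query text and field names are illustrative (they mirror the fields parsed later in this article), not the exact query the sample scripts send:

```python
import json

# Illustrative accountSnapshot query; the exact fields your script
# requests may differ from this sketch.
SNAPSHOT_QUERY = """
query snapshot($accountID: ID!) {
  accountSnapshot(accountID: $accountID) {
    id
    timestamp
    sites { id info { name } }
  }
}
"""

def build_payload(account_id):
    """Build the JSON body for a GraphQL request to the Cato API."""
    return json.dumps({
        "query": SNAPSHOT_QUERY,
        "variables": {"accountID": account_id},
    })

# Sending the request requires a valid API key, for example:
# import requests
# resp = requests.post("https://api.catonetworks.com/api/v1/graphql2",
#                      data=build_payload("YOUR ACCOUNT ID"),
#                      headers={"x-api-key": "YOUR API KEY",
#                               "content-type": "application/json"})
```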

How to Run the Cato API Requests and Pass the Data to an Elasticsearch Server

This section shows examples of Python scripts for the accountSnapshot and accountMetrics API calls. In this article, the scripts perform the following actions:

  1. Collect the data.
  2. Parse the response data.
  3. Send the data to the Elasticsearch server.

Handling the accountSnapshot Data

The accountSnapshot query collects near real-time data for the account. This example uses the following Python scripts:

  1. CatoApiVar.py – initializes variables such as the API key, account ID, domain name, Elasticsearch login credentials, and more.
  2. CatoSnapToDoc.py – parses the required fields from the accountSnapshot data into easy-to-read fields and structure.
  3. CatoPostSnap.py – sends the parsed snapshot data to the Elasticsearch index.

Initializing Variables

This section shows a sample script (CatoApiVar.py) that initializes variables. These are some examples of the required variables:

  • Domain – the Cato API domain: "https://api.catonetworks.com"
  • Epoint – the full URL for the API server: <domain>+'/api/v1/graphql2'
  • API_KEY – your Cato API key for authentication
  • ES_API_KEY, ES_USER, and ES_PASS – authentication variables for your Elasticsearch server
  • Account_id – your account ID from the Cato Management Application

The following Python script initializes the variables. To authenticate with the Elasticsearch account, you must add the credentials based on the authentication method you use. For username and password authentication, add ES_USER and ES_PASS. For API key authentication, use ES_API_KEY:

#! /usr/bin/python
import requests
import zipfile
import os
import logging
import logging.handlers
import json

# Variables
domain = "https://api.catonetworks.com"
epoint = domain + '/api/v1/graphql2'
API_KEY = "YOUR API KEY"
ES_API_KEY = "YOUR ES API KEY"
ES_USER = "YOUR ES USER"
ES_PASS = "YOUR ES PASSWORD"
accountId = "YOUR ACCOUNT ID"

# Log configuration
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)
handler = logging.handlers.SysLogHandler(address='/dev/log')
my_logger.addHandler(handler)

session = requests.Session()

cookies = {
    '__s': '-',
}

headers = {
    'Connection': 'keep-alive',
    'Pragma': 'no-cache',
    'Cache-Control': 'no-cache',
    'accept': '*/*',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36',
    'content-type': 'application/json',
    'Origin': domain,
    'Sec-Fetch-Site': 'same-origin',
    'Sec-Fetch-Mode': 'cors',
    'Sec-Fetch-Dest': 'empty',
    'Referer': domain,
    'Accept-Language': 'en-US,en;q=0.9,he;q=0.8',
    'x-api-key': API_KEY
}

myCookies = session.cookies
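The scripts later in this article authenticate to Elasticsearch with either a username and password or an API key. That choice can be sketched as a small helper; this helper is not part of the original scripts, and the client construction is shown as a comment because it needs the elasticsearch package and a reachable server:

```python
def es_auth_kwargs(es_user=None, es_pass=None, es_api_key=None):
    """Pick Elasticsearch auth keyword arguments based on which
    credentials are configured (API key vs. username/password)."""
    if es_api_key:
        return {"api_key": es_api_key}
    if es_user and es_pass:
        return {"http_auth": (es_user, es_pass)}
    raise ValueError("No Elasticsearch credentials configured")

# Usage (requires the elasticsearch package and a running server):
# from elasticsearch import Elasticsearch
# es = Elasticsearch([{'host': 'localhost', 'port': 9200}],
#                    **es_auth_kwargs(es_user=ES_USER, es_pass=ES_PASS))
```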

Parsing the accountSnapshot Fields

The following sample Python script parses the accountSnapshot object data from JSON into field strings and more readable text. For example, it parses data > accountSnapshot > sites > info > name into sitename. In the snapTodoc function, the loop runs on all the sites in your account and parses the data into the same text format.

The command es = Elasticsearch([{'host': 'localhost', 'port': 9200}],http_auth=(CatoApiVar.ES_USER, CatoApiVar.ES_PASS)) initializes the es object with the connection details for your Elasticsearch account. It uses the host machine, the port number, and the authentication credentials from the previous step.

#! /usr/bin/python
import requests
import json
import CatoApiVar
import elasticsearch
from elasticsearch import Elasticsearch

# Getting session variables like cookies
my_logger = CatoApiVar.my_logger
es = Elasticsearch([{'host': 'localhost', 'port': 9200}], http_auth=(CatoApiVar.ES_USER, CatoApiVar.ES_PASS))

# New JSON
ts = {}

def snapTodoc(parsedContent):
    for i in range(len(parsedContent['data']['accountSnapshot']['sites'])):
        ts.update({'account': parsedContent['data']['accountSnapshot']['id']})
        ts.update({'@timestamp': parsedContent['data']['accountSnapshot']['timestamp']})
        ts.update({'siteid': parsedContent['data']['accountSnapshot']['sites'][i]['id']})
        ts.update({'sitename': parsedContent['data']['accountSnapshot']['sites'][i]['info']['name']})
        ts.update({'metrics': parsedContent['data']['accountSnapshot']['sites'][i]['metrics']})
        ts.update({'sitestatus': parsedContent['data']['accountSnapshot']['sites'][i]['status']})
        ts.update({'description': parsedContent['data']['accountSnapshot']['sites'][i]['info']['description']})
        ts.update({'type': parsedContent['data']['accountSnapshot']['sites'][i]['info']['connType']})
        for x in range(len(parsedContent['data']['accountSnapshot']['sites'][i]['devices'])):
            ts.update({'deviceid': x})
            ts.update({'device': parsedContent['data']['accountSnapshot']['sites'][i]['devices'][x]})
            res = es.index(index="snapdocs", body=ts)
            my_logger.info('Snap docs for account were added ' + res['result'])
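The parsing step can be tried without an Elasticsearch server. The sketch below mirrors a simplified subset of the snapTodoc logic (top-level site fields only, no devices or metrics) against a hypothetical accountSnapshot response:

```python
def flatten_snapshot(parsed):
    """Flatten accountSnapshot sites into one flat dict per site,
    mirroring a subset of the snapTodoc fields."""
    snap = parsed["data"]["accountSnapshot"]
    docs = []
    for site in snap["sites"]:
        docs.append({
            "account": snap["id"],
            "@timestamp": snap["timestamp"],
            "siteid": site["id"],
            "sitename": site["info"]["name"],
            "sitestatus": site["status"],
        })
    return docs

# Hypothetical accountSnapshot response with one site
sample = {"data": {"accountSnapshot": {
    "id": "1234",
    "timestamp": "2021-01-01T06:00:00Z",
    "sites": [{"id": "1", "info": {"name": "HQ"}, "status": "connected"}],
}}}
```

Each flat dict is what would be passed as the body of an es.index call.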

Sending the Data to Elasticsearch

The following sample script shows how to send the parsed data from the previous step to the Elasticsearch account. The command res = es.index(index="accountsnapshot", body=doc) writes to the accountsnapshot index in the Elasticsearch account.

#! /usr/bin/python
import elasticsearch
from datetime import date
from elasticsearch import Elasticsearch
import json, csv
import CatoGetAccountSnapPret
from datetime import datetime
import CatoApiVar
import CatosnapTodoc

my_logger = CatoApiVar.my_logger
es = Elasticsearch([{'host': 'localhost', 'port': 9200}], api_key=CatoApiVar.ES_API_KEY)
doc = CatoGetAccountSnapPret.getAcountSnapshot(CatoApiVar.accountId)
doc = json.loads(doc)
my_logger.info('Pulled account snapshot ' + str(doc))
timeseries = {
    "@timestamp": datetime.now()
}
doc.update(timeseries)
CatosnapTodoc.snapTodoc(doc)

Handling the accountMetrics Data

The accountMetrics query collects real-time and historical metrics, statistics, and analytics for the account. This example uses the following Python scripts:

  1. CatoApiVar.py – initializes variables such as the API key, account ID, domain name, Elasticsearch login credentials, and more.
  2. CatotsToKv.py – parses the required fields from the accountMetrics data into easy-to-read fields and structure.
  3. CatoPostmetrics.py – sends the parsed metrics data to the Elasticsearch index.

Initializing Variables

This section shows a sample script (CatoApiVar.py) that initializes variables. These are examples of the required variables:

  • Domain – the Cato API domain: "https://api.catonetworks.com"
  • Epoint – the full URL for the API server: <domain>+'/api/v1/graphql2'
  • API_KEY – your Cato API key for authentication
  • ES_API_KEY, ES_USER, and ES_PASS – authentication variables for your Elasticsearch server
  • Account_id – your account ID from the Cato Management Application

The following Python script initializes the variables. To authenticate with the Elasticsearch account, you must add the credentials based on the authentication method you use. For username and password authentication, add ES_USER and ES_PASS. For API key authentication, use ES_API_KEY:

#! /usr/bin/python
import requests
import zipfile
import os
import logging
import logging.handlers
import json

# Variables
domain = "https://api.catonetworks.com"
epoint = domain + '/api/v1/graphql2'
API_KEY = "YOUR API KEY"
ES_API_KEY = "YOUR ES API KEY"
ES_USER = "YOUR ES USER"
ES_PASS = "YOUR ES PASSWORD"
accountId = "YOUR ACCOUNT ID"

# Log configuration
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)
handler = logging.handlers.SysLogHandler(address='/dev/log')
my_logger.addHandler(handler)

session = requests.Session()

cookies = {
    '__s': '-',
}

headers = {
    'Connection': 'keep-alive',
    'Pragma': 'no-cache',
    'Cache-Control': 'no-cache',
    'accept': '*/*',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36',
    'content-type': 'application/json',
    'Origin': domain,
    'Sec-Fetch-Site': 'same-origin',
    'Sec-Fetch-Mode': 'cors',
    'Sec-Fetch-Dest': 'empty',
    'Referer': domain,
    'Accept-Language': 'en-US,en;q=0.9,he;q=0.8',
    'x-api-key': API_KEY
}

myCookies = session.cookies

Parsing the AccountMetrics Fields

This section contains a sample Python script that parses the accountMetrics data. In the tsTokv function, the first loop runs on all the sites in your account. The second loop runs on all the interfaces of each site; the script parses the data and adds it to the "ts" (timeseries) object. The third loop runs on all the periods for each interface. The periods are the durations of events.

For example, a period with the title "packet loss" lasting 10 minutes, from start time 06:00 to end time 06:10, contains several packet loss events. The script parses these periods and adds them to a flat structure in the annotations object. This structure makes it easier for Elasticsearch to display the data.
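The flattening described above can be sketched in isolation. The function below mirrors how the script fills the annotations object from an interface's periods; the sample interface data is hypothetical:

```python
def flatten_periods(interface):
    """Flatten an interface's periods into start/end rows, mirroring
    how the annotations object is populated."""
    rows = []
    for idx, period in enumerate(interface["periods"]):
        rows.append({
            "period": idx,
            "title": period["title"],
            "type": period["type"],
            "start": period["duration"][0],  # period start time
            "end": period["duration"][1],    # period end time
        })
    return rows

# Hypothetical interface with one 10-minute packet-loss period
sample_interface = {"periods": [
    {"title": "packet loss", "type": "quality",
     "duration": ["06:00", "06:10"]},
]}
```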

#! /usr/bin/python
import requests
import json
import CatoApiVar
import elasticsearch
from elasticsearch import Elasticsearch
import datetime

# Getting session variables like cookies
my_logger = CatoApiVar.my_logger
es = Elasticsearch([{'host': 'localhost', 'port': 9200}], http_auth=(CatoApiVar.ES_USER, CatoApiVar.ES_PASS))

# New JSON
ts = {}
annotations = {}

def tsTokv(parsedContent):
    for i in range(len(parsedContent['data']['accountMetrics']['sites'])):
        ts.update({'account': parsedContent['data']['accountMetrics']['id']})
        ts.update({'site': parsedContent['data']['accountMetrics']['sites'][i]['name']})
        annotations.update({'site': parsedContent['data']['accountMetrics']['sites'][i]['name']})
        annotations.update({'@timestamp': datetime.datetime.now()})
        annotations.update({'account': parsedContent['data']['accountMetrics']['id']})
        for x in range(len(parsedContent['data']['accountMetrics']['sites'][i]['interfaces'])):
            ts.update({'interface': parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['name']})
            ts.update({'socketInfo': parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['socketInfo']})
            ts.update({'remoteIPInfo': parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['remoteIPInfo']})
            ts.update({'remoteIP': parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['remoteIP']})
            annotations.update({'interface': parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['name']})
            if len(parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['periods']) > 0:
                for z in range(len(parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['periods'])):
                    annotations.update({'period': z, "title": parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['periods'][z]['title'], 'type': parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['periods'][z]['type']})
                    for gz in range(len(parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['periods'][z]['duration'])):
                        annotations.update({'start': parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['periods'][z]['duration'][0], 'end': parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['periods'][z]['duration'][1]})
            for y in range(len(parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['timeseries'])):
                ts.update({'telemetry': parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['timeseries'][y]['label']})
                for kv in range(len(parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['timeseries'][y]['data'])):
                    ts.update({'@timestamp': parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['timeseries'][y]['data'][kv][0], "value": parsedContent['data']['accountMetrics']['sites'][i]['interfaces'][x]['timeseries'][y]['data'][kv][1]})
                    res = es.index(index="stats", body=ts)
            resAnn = es.index(index="annotations", body=annotations)
    my_logger.info('Metrics for account were added ' + res['result'])

Sending the Data to the Elasticsearch Server

The following sample script shows how to send the parsed accountMetrics data from the previous step to the Elasticsearch account.

#! /usr/bin/python
import elasticsearch
from elasticsearch import Elasticsearch
import json, csv
import CatoGetAccountMetricsPret
import datetime
import CatoApiVar
import CatotsTokv
my_logger = CatoApiVar.my_logger
es = Elasticsearch([{'host': 'localhost', 'port': 9200}],api_key=CatoApiVar.ES_API_KEY)
timeago = datetime.datetime.now() - datetime.timedelta(minutes=5)
timeframe = "utc." + timeago.strftime("%Y-%m-%d/{%H:%M:%S") + datetime.datetime.now().strftime("--%H:%M:%S}")
sites = [""]
my_logger.info('Time frame for metrics call '+ timeframe)
doc = CatoGetAccountMetricsPret.getAcountMetrics(CatoApiVar.accountId,60,timeframe,sites)
doc = json.loads(doc)
my_logger.info('Pulled account metrics for '+ timeframe + " " + str(doc) )
timeseries = {
"@timestamp": datetime.datetime.now()
}
doc.update(timeseries)
CatotsTokv.tsTokv(doc)
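The timeframe string built by the script above follows the pattern utc.<date>/{<start>--<end>}. A small helper, assuming the same format as the sample script, makes the construction easy to verify:

```python
import datetime

def build_timeframe(end, minutes=5):
    """Build a utc.<date>/{<start>--<end>} timeframe string covering
    the given number of minutes before the end time."""
    start = end - datetime.timedelta(minutes=minutes)
    return ("utc." + start.strftime("%Y-%m-%d/{%H:%M:%S")
            + end.strftime("--%H:%M:%S}"))
```

For example, build_timeframe(datetime.datetime(2021, 1, 1, 6, 10, 0)) covers the five minutes ending at 06:10:00 on 2021-01-01.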
