ELK4QRadar
This project was created to help SOC MSSP teams that use QRadar SIEM with multiple clients to collect and centralize monitoring statistics from all QRadar deployments.
The project is available on GitHub.
In this repository we provide an index template that you can install in your Elastic Stack:

PUT _template/<YOUR_TEMPLATE_NAME>
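As a minimal sketch of what such a request could look like, assuming the legacy _template API (the index pattern and field names below are illustrative placeholders, not the project's actual definition; use the template file shipped in the repository):

```json
PUT _template/qradar_offenses
{
  "index_patterns": ["qradar-offenses-*"],
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "client":  { "properties": { "name": { "type": "keyword" } } },
      "offense": { "properties": { "id": { "type": "long" } } }
    }
  }
}
```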
Populate the YAML files in /etc/logstash with data appropriate to your context. We provide samples in this project (sketched below):
timezone.yml: Contains a dictionary of client names and their corresponding timezones.
clientnames.yml: Contains a dictionary of input configuration tags and their corresponding client names.
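For illustration, assuming a client named ACME whose input configuration is tagged acme (all keys and values below are hypothetical), the two dictionaries could look like this:

```yaml
# timezone.yml - maps a client name to its timezone
"ACME": "Europe/Paris"
"Globex": "America/New_York"
```

```yaml
# clientnames.yml - maps an input configuration tag to a client name
"acme": "ACME"
"globex": "Globex"
```

Dictionaries in this form are typically consumed by Logstash's translate filter, which looks up a field value and writes the match into another field (see the filter sketch further below).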
Copy the conf.d configuration files into your Logstash conf.d folder and customize them to your needs.
Create a /home/USER/Offenses/ folder to store the search data extracted from QRadar as CSV.
Create the following scripted fields in Kibana:

| Name | Lang | Script | Format |
| --- | --- | --- | --- |
| offense.day_of_week | painless | `doc['@timestamp'].value.dayOfWeekEnum` | String |
| offense.hour_of_day | painless | `doc['@timestamp'].value.hourOfDay` | Number |
These scripted fields feed visualizations such as Busiest Day, Busiest Hour, and Offenses average by day of week. I created a custom template for this use case.
This part of the project contains Logstash configuration files that process and parse the CSV files saved by the Python script in /home/elk/Offenses. Notice here that I am storing my AQL search results in the Offenses folder under the elk user's home folder.
PS: Please see the index template definition for a basic understanding of the fields used in this project.
Logstash pipelines are organized into three parts:
Input configurations: Create an input configuration for each file you want to ingest into Elasticsearch.
Example:
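A minimal sketch, assuming the Python script drops a per-client CSV export under /home/elk/Offenses (path, filename pattern, and tag are illustrative):

```conf
input {
  file {
    # Watch the CSV exports produced by the AQL extraction script
    path => "/home/elk/Offenses/acme_offenses*.csv"
    start_position => "beginning"
    sincedb_path => "/var/lib/logstash/sincedb_acme"
    # This tag is looked up in clientnames.yml by the filter stage
    tags => ["acme"]
  }
}
```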
Filter configurations: Process and enrich the incoming data and normalize event fields.
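For instance, a filter stage could parse the CSV and apply the two YAML dictionaries via the translate filter. This is a sketch under assumptions: the CSV columns and the date pattern below must be aligned with your actual AQL export.

```conf
filter {
  csv {
    # Column names are assumptions; match them to your AQL SELECT clause
    columns => ["offense_id", "description", "start_time", "magnitude"]
  }
  # Resolve the client name from the input tag via clientnames.yml
  translate {
    source => "[tags][0]"
    target => "[client][name]"
    dictionary_path => "/etc/logstash/clientnames.yml"
  }
  # Look up the client's timezone from timezone.yml
  translate {
    source => "[client][name]"
    target => "[client][timezone]"
    dictionary_path => "/etc/logstash/timezone.yml"
  }
  date {
    # Date pattern is an assumption; adjust it to the CSV export format
    match => ["start_time", "dd/MM/yyyy HH:mm:ss"]
    target => "@timestamp"
  }
}
```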
Output configurations: Used for sending data to Elasticsearch.
Example:
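A minimal sketch (host, credentials, and index name are placeholders; align the index name with your template's index pattern):

```conf
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    # Index name is illustrative; it should match the template's index_patterns
    index => "qradar-offenses-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
}
```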