ELK4QRadar
This project was created to help SOC/MSSP teams that use QRadar SIEM with multiple clients collect and centralize monitoring statistics from all of their QRadar deployments.
The project is available on GitHub.

Guide

  1. PUT _template/<YOUR_TEMPLATE_NAME>. In this repository we provide an index template that you can install in your Elastic Stack.
  2. Populate the YAML files in /etc/logstash with the data appropriate to your context. We provide samples in this project:
    • timezone.yml: contains a dictionary of client names and their corresponding timezones.
    • clientnames.yml: contains a dictionary of input configuration tags and their corresponding client names.
  3. Copy the conf.d configuration into your Logstash conf.d folder and customize it to your needs.
  4. Create a /home/USER/Offenses/ folder to save the search data extracted from QRadar as CSV.
  5. Create the following scripted fields in Kibana:
Name                | Lang     | Script                                | Format
offense.day_of_week | painless | doc['@timestamp'].value.dayOfWeekEnum | String
offense.hour_of_day | painless | doc['@timestamp'].value.hourOfDay     | Number
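As a quick sanity check of what these two scripted fields return, the same values can be reproduced outside Kibana (a Python sketch, not part of the project; it mirrors dayOfWeekEnum and hourOfDay):

```python
from datetime import datetime

# Reproduce the two Kibana scripted fields for a sample timestamp.
ts = datetime(2023, 5, 15, 14, 30)  # a Monday at 14:30

day_of_week = ts.strftime("%A").upper()  # mirrors dayOfWeekEnum, e.g. MONDAY
hour_of_day = ts.hour                    # mirrors hourOfDay, 0-23

print(day_of_week, hour_of_day)  # MONDAY 14
```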

Metrics samples

  • Busiest Day
  • Busiest Hour
  • Offenses average by day of week
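For instance, the "Busiest Day" metric can be derived with a terms aggregation on the offense.day_of_week field (a sketch, assuming that field is populated at ingest as mapped in the index template):

```
GET soc-statistics-offenses-*/_search
{
  "size": 0,
  "aggs": {
    "busiest_day": {
      "terms": { "field": "offense.day_of_week", "size": 7 }
    }
  }
}
```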

Index Template

I created a custom template for this use case:
{
  "index_patterns": [
    "soc-statistics-offenses-*"
  ],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    },
    "mappings": {
      "dynamic_templates": [
        {
          "strings_as_keyword": {
            "mapping": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "match_mapping_type": "string"
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "offense": {
          "properties": {
            "owner": { "type": "keyword" },
            "note": {
              "properties": {
                "date": {
                  "format": "MMM d, yyyy, h:mm:ss a",
                  "type": "date"
                }
              }
            },
            "storagetime": {
              "format": "yyyy-MM-dd hh:mm:ss a",
              "type": "date"
            },
            "assigned": {
              "properties": {
                "date": {
                  "format": "MMM d, yyyy, h:mm:ss a",
                  "type": "date"
                }
              }
            },
            "id": { "type": "keyword" },
            "starttime": {
              "format": "yyyy-MM-dd hh:mm:ss a",
              "type": "date"
            },
            "logsourcetime": {
              "format": "yyyy-MM-dd hh:mm:ss a",
              "type": "date"
            },
            "close": {
              "properties": {
                "date": {
                  "format": "MMM d, yyyy, h:mm:ss a",
                  "type": "date"
                },
                "reason": { "type": "text" },
                "analyst": { "type": "keyword" }
              }
            },
            "hour_of_day": { "type": "keyword" },
            "status": { "type": "keyword" },
            "day_of_week": { "type": "keyword" }
          }
        },
        "domain": {
          "properties": {
            "name": { "type": "keyword" }
          }
        },
        "rule": {
          "properties": {
            "severity": { "type": "keyword" },
            "risk_score": { "type": "keyword" },
            "name": { "type": "keyword" },
            "threat": {
              "properties": {
                "technique": {
                  "properties": {
                    "name": { "type": "keyword" },
                    "id": { "type": "keyword" }
                  }
                },
                "tactic": {
                  "properties": {
                    "name": { "type": "keyword" },
                    "id": { "type": "keyword" }
                  }
                }
              }
            },
            "category": { "type": "keyword" },
            "class": { "type": "keyword" }
          }
        },
        "client": {
          "properties": {
            "name": { "type": "keyword" }
          }
        },
        "analyst": {
          "type": "nested",
          "properties": {
            "notes": { "type": "text" },
            "username": { "type": "keyword" }
          }
        },
        "event": {
          "properties": {
            "timezone": { "type": "keyword" },
            "name": { "type": "keyword" }
          }
        },
        "tags": { "type": "keyword" }
      }
    }
  }
}
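The template can be loaded in a single request. A sketch, assuming a local cluster and that the JSON above is saved as soc-statistics-template.json (a hypothetical filename); note that this body uses the composable template format, so on Elasticsearch 7.8+ the _index_template endpoint is the appropriate target:

```
curl -X PUT "https://localhost:9200/_index_template/<YOUR_TEMPLATE_NAME>" \
  -u "<USERNAME>:<PASSWORD>" \
  --cacert /etc/logstash/root-ca.pem \
  -H "Content-Type: application/json" \
  -d @soc-statistics-template.json
```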

Logstash Configuration Files

This part of the project contains the Logstash configuration files that process and parse the CSV files saved by the Python script in /home/elk/Offenses. Notice here that I am storing my AQL search results in an Offenses folder in the elk user's home folder.
PS: Please see the index template definition for a basic understanding of the fields used in this project.
Logstash pipelines are organized in three parts:
  • Input configurations: make an input configuration for each file you want to ingest into Elasticsearch.
Example :
input {
  file {
    path => "/home/<USER>/<FOLDER NAME>/<FILENAME>.csv"
    start_position => "beginning"
    tags => ["<MY_CLIENT>"]
    type => "OFFENSES"
  }
}
  • Filter configuration: for processing and enriching the incoming data and normalizing event fields.
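Example (no filter file is reproduced on this page; this is a minimal sketch with hypothetical CSV column names, wiring in the two YAML dictionaries from /etc/logstash via the translate filter):

```
filter {
  if [type] == "OFFENSES" {
    csv {
      separator => ","
      # Hypothetical column list - align it with your AQL export
      columns => ["id", "description", "starttime", "status"]
    }
    # Map the input configuration tag to a client name via clientnames.yml
    translate {
      source => "[tags][0]"
      target => "[client][name]"
      dictionary_path => "/etc/logstash/clientnames.yml"
    }
    # Look up the client's timezone via timezone.yml
    translate {
      source => "[client][name]"
      target => "[event][timezone]"
      dictionary_path => "/etc/logstash/timezone.yml"
    }
  }
}
```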
  • Output configuration: used for sending data to Elasticsearch.
Example :
output {
  if [type] == "OFFENSES" {
    elasticsearch {
      hosts => ["https://localhost:9200"]
      index => "soc-statistics-offenses-%{[client][name]}-%{+yyyy.MM}"
      #manage_template => false
      cacert => "/etc/logstash/root-ca.pem"
      user => "<USERNAME>"
      password => "<PASSWORD>"
      ssl => true
      ssl_certificate_verification => false
    }
  }
}