Filebeat¶
From https://www.elastic.co/beats/filebeat:
Filebeat helps you keep the simple things simple by offering a lightweight way to forward and centralize logs and files.
On an Evaluation installation, Filebeat sends logs directly to Elasticsearch. For other installation types, Filebeat sends to Logstash.
Configuration¶
You can configure Filebeat inputs and output using Salt. An example of the filebeat pillar can be seen at https://github.com/Security-Onion-Solutions/securityonion/blob/master/salt/filebeat/pillar.example
Any inputs added to the pillar definition are in addition to the default inputs. To prevent a Zeek log from being used as an input, modify the zeeklogs:enabled pillar. The easiest way to do this is via so-zeek-logs.
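The zeeklogs:enabled pillar holds the list of Zeek logs that Filebeat ingests; removing an entry prevents that log from being used as an input. A minimal sketch of the pillar (the log names shown are illustrative; so-zeek-logs manages the real list for you):
zeeklogs:
  enabled:
    - conn
    - dns
    - http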
Diagnostic Logging¶
Filebeat’s log can be found in /opt/so/log/filebeat/.
To debug Filebeat, copy /opt/so/saltstack/default/salt/filebeat/etc/filebeat.yml to /opt/so/saltstack/local/salt/filebeat/etc/filebeat.yml, then change the logging.level value to debug. Next, restart Filebeat with so-filebeat-restart. Be sure to remove the local file after debugging.
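For reference, the setting in the local copy of filebeat.yml would look like this (the rest of the copied file stays unchanged):
logging.level: debug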
Depending on what you’re looking for, you may also need to look at the Docker logs for the container:
sudo docker logs so-filebeat
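To follow the output live, or narrow it down to errors, the standard docker logs options apply:
# follow the log output live
sudo docker logs -f so-filebeat
# search both stdout and stderr for errors
sudo docker logs so-filebeat 2>&1 | grep -i error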
Modules¶
We support official Filebeat modules and you can learn more at https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules.html.
Example 1: AWS CloudTrail Logs¶
If you would like to parse AWS CloudTrail logs using the Filebeat cloudtrail module, you can enable the Filebeat module on any nodes that require it. Depending on your deployment, you might add the following configuration to the global pillar in global.sls, the manager’s minion pillar in /opt/so/saltstack/local/pillar/minions/$managername_manager.sls, and/or the search node pillars in /opt/so/saltstack/local/pillar/minions/. If you have a distributed deployment using cross cluster search, then you will need to enable it for the manager and each search node. If you have a distributed deployment using Elastic clustering, then it only needs to be enabled for the manager.
Here’s the configuration:
filebeat:
  third_party_filebeat:
    modules:
      aws:
        cloudtrail:
          enabled: true
          var.queue_url: https://sqs.$REGION.amazonaws.com/$ACCOUNTID/$QUEUENAME
          var.access_key_id: ABCD1234
          var.secret_access_key: ABCD1234ABCD1234
Access key details can be found within the AWS console by navigating to My Security Credentials -> Access Keys.
Example 2: Fortinet Logs¶
If you would like to parse Fortinet logs using the Filebeat fortinet module, you can enable the Filebeat module on any nodes that require it. Depending on your deployment, you might add the following configuration to the global pillar in global.sls, the manager’s minion pillar in /opt/so/saltstack/local/pillar/minions/$managername_manager.sls, and/or the search node pillars in /opt/so/saltstack/local/pillar/minions/. If you have a distributed deployment using cross cluster search, then you will need to enable it for the manager and each search node. If you have a distributed deployment using Elastic clustering, then it only needs to be enabled for the manager.
Here’s the configuration:
filebeat:
  third_party_filebeat:
    modules:
      fortinet:
        firewall:
          enabled: true
          var.input: udp
          var.syslog_host: 0.0.0.0
          var.syslog_port: 9004
(Please note that firewall ports still need to be opened on the minion to accept the Fortinet logs; see the sketch below.)
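As a minimal sketch of opening that port with so-firewall, mirroring the Netflow firewall steps later on this page (the fortinet group name and the 192.168.10.0/24 source network are placeholders for your environment):
# create host and port groups for the Fortinet senders
sudo so-firewall addhostgroup fortinet
sudo so-firewall addportgroup fortinet
# allow the sender network to reach udp/9004
sudo so-firewall includehost fortinet 192.168.10.0/24
sudo so-firewall addport fortinet udp 9004
As in the Netflow walkthrough below, the new groups would then be assigned to the DOCKER-USER and INPUT chains in the minion pillar, and the Filebeat container's port_bindings may also need a matching entry for port 9004.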
Walkthrough: AWS CloudTrail Logs¶
In this brief walkthrough, we’ll use the aws module for Filebeat to ingest cloudtrail logs from Amazon Web Services into Security Onion.
Credit goes to Kaiyan Sheng and Elastic for having an excellent starting point on which to base this walkthrough: https://www.elastic.co/blog/getting-aws-logs-from-s3-using-filebeat-and-the-elastic-stack.
Please follow the steps below to get started.
The official Elastic documentation for the AWS module can be found here:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-aws.html
NOTE: This module requires that the user have a valid AWS service account, and credentials/permissions to access the SQS queue we will be configuring.
AWS CloudTrail Configuration
Create an SQS queue:
Navigate to Amazon SQS -> Queues, and click Create queue. Specify the queue details, choosing the Standard queue type and providing a name.
Specify an Advanced access policy and add the policy configuration (adjusting to suit your environment as needed):
{
  "Version": "2012-10-17",
  "Id": "example-ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": [
        "SQS:SendMessage"
      ],
      "Resource": "arn:aws:sqs:<region>:<account-id>:<queue-name>",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": "<account-id>" }
      }
    }
  ]
}
After the queue has been created, you will be redirected to a summary screen. From there, copy the provided URL value; it will be used to populate the queue URL in Security Onion’s Filebeat configuration.
Create a Trail:
We’ll create a trail using the AWS CloudTrail console. To get to the CloudTrail console, search for cloudtrail in the AWS search bar at the top of the main console and select CloudTrail. From the main page of the CloudTrail console, we can create our trail by clicking Create a trail.
Next, we’ll configure some basic details and choose to use a new S3 bucket with our trail. We’ll also need to specify an alias for a KMS key. Scroll down and click Next. From here, we’ll select the type of log events we want to include with our trail. We’ll then review our changes and click Create Trail.
The trail should now be created and viewable in CloudTrail -> Trails. The Status column should display Logging. Because we chose to create a new bucket when creating the trail, an S3 bucket should already be created.
We’ll need to ensure our bucket is configured correctly by modifying the event notification properties. To do this, we’ll navigate to Amazon S3 -> $BucketName -> Properties -> Event notifications -> Create event notification.
Under Event Types, we can select the types of events for which we would like to receive notifications in our SQS queue. We’ll also need to select the queue where events will be published.
If we would like to log bucket access events, we can enable Server Access Logging (within the bucket Properties section).
Security Onion Configuration
Now that we’ve configured our CloudTrail trail and SQS queue, we need to place our credential information into our Filebeat module configuration within Security Onion. In this example, we’ll edit the minion pillar for the node we want to pull in the AWS CloudTrail logs – in this case, a standalone node. In a distributed environment, this would likely be the manager node.
Edit /opt/so/saltstack/local/pillar/minions/$minion_standalone.sls, adding the following configuration (if you are already using other modules, simply append the module-specific configuration without repeating the filebeat.third_party_filebeat.modules portion):
filebeat:
  third_party_filebeat:
    modules:
      aws:
        cloudtrail:
          enabled: true
          var.queue_url: https://sqs.us-east-2.amazonaws.com/$youraccountid/demo-queue
          var.access_key_id: ABCDE1234
          var.secret_access_key: AbCdeFG...
Next, restart Filebeat on the node with so-filebeat-restart.
After a few minutes, assuming there are logs to be gathered, Filebeat should pull in those logs from AWS, and an Elasticsearch index named so-aws-$DATE should be created. This can be verified by navigating to Hunt or Kibana and searching for event.module:aws.
We can also run the so-elasticsearch-query command, like so:
so-elasticsearch-query _cat/indices | grep aws
Congratulations! You’ve ingested AWS Cloudtrail logs into Security Onion!
Walkthrough: Google Workspace Audit Logs¶
In this brief walkthrough, we’ll use the google_workspace module for Filebeat to ingest admin and user_accounts logs from Google Workspace into Security Onion.
Please follow the steps below to get started.
The official Elastic documentation for the Google Workspace module can be found here:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-google_workspace.html
NOTE: This module requires that the user have a valid Google Workspace administrator account. You’ll also need to set up a project within Google Cloud if that has not already been done (we will set this up as needed during the walkthrough).
Google Cloud/Workspace Configuration
Google provides documentation for setting up a service account here:
https://support.google.com/workspacemigrate/answer/9222993?hl=en
In this example, we’ll choose the automated method of service account creation (using a script and the Cloud Shell).
We can enter the Cloud Shell by clicking the Cloud Shell icon (right-hand side of the screen) from console.cloud.google.com, signed in as our Google Workspace Super Administrator.
Once opened, we will run the following command:
python3 <(curl -s -S -L https://git.io/gwm-create-service-account)
After running the command, we will be presented with a menu (press Enter to continue).
The script will proceed through the steps until the first phase of setup is complete.
After the first phase of setup, you will be provided a URL to visit and authorize the changes. When authorizing changes, make sure to add the following OAuth scope to the client:
https://www.googleapis.com/auth/admin.reports.audit.readonly
Navigate back to the Cloud Shell and press Enter to proceed through the rest of the setup.
You will be prompted to download a file containing the service account credentials.
Ensure this file is kept safe. We will provide it to Filebeat in the Security Onion Filebeat module configuration.
Security Onion Configuration
Now that we’ve set up a service account and obtained a credentials file, we need to place it into our Filebeat module configuration within Security Onion. In this example, we’ll edit the minion pillar for the node we want to pull in the Google Workspace logs – in this case, a standalone node. In a distributed environment, this would likely be the manager node.
Copy the credentials file to /opt/so/conf/filebeat/modules/ as credentials_file.json.
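For example (the source filename is a placeholder for whatever Google generated; this host directory is where the Filebeat container picks the file up, which is why the pillar below references it at /usr/share/filebeat/modules.d/credentials_file.json):
# copy the downloaded service account key into place
sudo cp ~/your-project-123abc.json /opt/so/conf/filebeat/modules/credentials_file.json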
Edit /opt/so/saltstack/local/pillar/minions/$minion_standalone.sls, adding the following configuration (if you are already using other modules, simply append the module-specific configuration without repeating the filebeat.third_party_filebeat.modules portion):
filebeat:
  third_party_filebeat:
    modules:
      google_workspace:
        admin:
          enabled: true
          var.jwt_file: "/usr/share/filebeat/modules.d/credentials_file.json"
          var.delegated_account: "adminuser@yourdomain.com"
        user_accounts:
          enabled: true
          var.jwt_file: "/usr/share/filebeat/modules.d/credentials_file.json"
          var.delegated_account: "adminuser@yourdomain.com"
Next, restart Filebeat on the node with so-filebeat-restart.
After a few minutes, assuming there are logs to be gathered, Filebeat should pull in those logs from Google Workspace, and an Elasticsearch index named so-google_workspace-$DATE should be created. This can be verified by navigating to Hunt or Kibana and searching for event.module:google_workspace.
We can also run the so-elasticsearch-query command, like so:
so-elasticsearch-query _cat/indices | grep google_workspace
Congratulations! You’ve ingested Google Workspace logs into Security Onion!
Walkthrough: Okta System Logs¶
In this brief walkthrough, we’ll use the okta module for Filebeat to ingest system logs from Okta into Security Onion. Please follow the steps below to get started.
The official Elastic documentation for the Okta module can be found here:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-okta.html
NOTE: This module requires that the user have a valid API token for access to their Okta instance.
Okta Configuration
Within the Okta administrative console, from the pane on the left-hand side of the screen, navigate to Security -> API.
Next, navigate to Tokens and click Create Token.
Enter a name for the token, then click Create Token.
A confirmation message should appear.
Ensure the token provided below the message is saved and stored securely.
Security Onion Configuration
Now that we’ve got our token, we need to place it into our Filebeat module configuration within Security Onion. In this example, we’ll edit the minion pillar for the node we want to pull in the Okta logs – in this case, a standalone node. In a distributed environment, this would likely be the manager node.
Edit /opt/so/saltstack/local/pillar/minions/$minion_standalone.sls, adding the following configuration (if you are already using other modules, simply append the module-specific configuration without repeating the filebeat.third_party_filebeat.modules portion):
filebeat:
  third_party_filebeat:
    modules:
      okta:
        system:
          enabled: true
          var.url: https://$yourdomain/api/v1/logs
          var.api_key: '$yourtoken'
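Before restarting Filebeat, you can optionally confirm the token works by querying the endpoint directly, assuming curl is available on the node. Okta API tokens are passed in an SSWS Authorization header; $yourdomain and $yourtoken are the same placeholders as above:
# request a single log event to validate the token and URL
curl -s -H "Authorization: SSWS $yourtoken" "https://$yourdomain/api/v1/logs?limit=1"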
Next, restart Filebeat on the node with so-filebeat-restart.
After a few minutes, assuming there are logs to be gathered, Filebeat should pull in those logs from Okta, and an Elasticsearch index named so-okta-$DATE should be created. This can be verified by navigating to Hunt or Kibana and searching for event.module:okta.
We can also run the so-elasticsearch-query command, like so:
so-elasticsearch-query _cat/indices | grep okta
Congratulations! You’ve ingested Okta logs into Security Onion!
Walkthrough: Netflow Logs¶
In this brief walkthrough, we’ll use the netflow module for Filebeat to ingest Netflow logs into Security Onion.
Note
Check out our Netflow video at https://youtu.be/ew5gtVjAs7g!
The official Elastic documentation for the Netflow module can be found here:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-netflow.html
Overview of steps:
- enable third party module
- update docker config
- update firewall config
- build logstash pipeline
Enable third party module
If you would like to ingest Netflow logs using the Filebeat netflow module, you can enable the Filebeat module on any nodes that require it. Depending on your deployment, you might add the following configuration to the global pillar in global.sls, the manager’s minion pillar in /opt/so/saltstack/local/pillar/minions/$managername_manager.sls, and/or the search node pillars in /opt/so/saltstack/local/pillar/minions/. If you have a distributed deployment using cross cluster search, then you will need to enable it for the manager and each search node. If you have a distributed deployment using Elastic clustering, then it only needs to be enabled for the manager.
Here’s the configuration:
filebeat:
  third_party_filebeat:
    modules:
      netflow:
        log:
          enabled: true
          var.netflow_host: 0.0.0.0
          var.netflow_port: 2055
Update docker config
Next, we need to add an extra listening port to the Filebeat container. We’ll start by making a local copy of the filebeat init.sls file:
sudo cp /opt/so/saltstack/default/salt/filebeat/init.sls /opt/so/saltstack/local/salt/filebeat/init.sls
Next, set permissions on the file:
sudo chown socore:socore /opt/so/saltstack/local/salt/filebeat/init.sls
Edit /opt/so/saltstack/local/salt/filebeat/init.sls and add port 2055 to the port_bindings section of the so-filebeat config:
- port_bindings:
  - 0.0.0.0:514:514/udp
  - 0.0.0.0:514:514/tcp
  - 0.0.0.0:2055:2055/udp
  - 0.0.0.0:5066:5066/tcp
Save the file and run sudo salt-call state.apply filebeat to allow Salt to recreate the container. You can check that the config has been applied by running sudo docker ps | grep so-filebeat. You should see 0.0.0.0:2055->2055/udp among the other existing listening ports.
Update firewall config
The next step is to add a host group and port group for Netflow traffic to allow it through the firewall. Replace 172.30.0.0/16 with whatever is appropriate for your network.
sudo so-firewall addhostgroup netflow
sudo so-firewall addportgroup netflow
sudo so-firewall includehost netflow 172.30.0.0/16
sudo so-firewall addport netflow udp 2055
Edit /opt/so/saltstack/local/pillar/minions/<manager.sls> to add iptables rules allowing the new netflow groups:
firewall:
  assigned_hostgroups:
    chain:
      DOCKER-USER:
        hostgroups:
          netflow:
            portgroups:
              - portgroups.netflow
      INPUT:
        hostgroups:
          netflow:
            portgroups:
              - portgroups.netflow
Save the file, then run sudo salt-call state.apply firewall to enable the new firewall rules.
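To confirm the rules are in place, you can inspect the relevant iptables chains directly (a quick sketch; the exact rule output will vary with your host group contents):
# the netflow port group should now appear in both chains
sudo iptables -nL DOCKER-USER | grep 2055
sudo iptables -nL INPUT | grep 2055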
Build logstash pipeline
Now the module is enabled, the container is listening on the correct port, and the firewall allows traffic to reach the container. The final step is to ensure that the Netflow ingest pipeline is loaded; otherwise the data will not be saved to Elasticsearch.
Note: If you have a distributed setup, you need to run the following command on the search nodes as well:
sudo docker exec -i so-filebeat filebeat setup modules --pipelines --modules netflow -M "netflow.log.enabled=true" -c /usr/share/filebeat/module-setup.yml
You should see Loaded Ingest pipelines. Once that is complete, run sudo so-filebeat-restart.
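You can also confirm the pipelines were loaded by querying Elasticsearch with the so-elasticsearch-query helper shown earlier (pipeline names vary by Filebeat version):
# list ingest pipelines whose names mention netflow
so-elasticsearch-query _ingest/pipeline | grep -o '"filebeat[^"]*netflow[^"]*"' | sort -u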
Assuming you have Netflow sources sending data, you should now start to see data in Dashboards or Hunt. Group by event.dataset and you should see netflow.log entries appearing.
More Information¶
Note
For more information about Filebeat, please see https://www.elastic.co/beats/filebeat.