Splunk Connect for Syslog (SC4S)


These steps configure SC4S (syslog-ng) using Podman in a development environment; once it is working, you can plan for a production deployment.

I followed the documentation from the official site, https://splunk-connect-for-syslog.readthedocs.io/en/master/ but wanted to get my head around it all, so I put this blog together as a reference point.

You can use Docker if you want; I preferred Podman. (Podman is Red Hat's own container engine.)

Podman is just a single command that you run on the command line. There are no daemons doing stuff in the background, which means Podman can be integrated into system services through systemd.

You will need to understand how Splunk works under the hood and have some basic knowledge of syslog and containers, so get with the program before you start!

Pre-requisites

Step 1 Configure Indexes. These will be used by the SC4S connector; the default indexes can be changed, but use these as a starting point.

indexes.conf
[email]
homePath = $SPLUNK_DB/email/db
coldPath = $SPLUNK_DB/email/colddb
thawedPath = $SPLUNK_DB/email/thaweddb
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 512000

[netauth]
homePath = $SPLUNK_DB/netauth/db
coldPath = $SPLUNK_DB/netauth/colddb
thawedPath = $SPLUNK_DB/netauth/thaweddb
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 512000

[netfw]
homePath = $SPLUNK_DB/netfw/db
coldPath = $SPLUNK_DB/netfw/colddb
thawedPath = $SPLUNK_DB/netfw/thaweddb
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 512000

[netids]
homePath = $SPLUNK_DB/netids/db
coldPath = $SPLUNK_DB/netids/colddb
thawedPath = $SPLUNK_DB/netids/thaweddb
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 512000

[netops]
homePath = $SPLUNK_DB/netops/db
coldPath = $SPLUNK_DB/netops/colddb
thawedPath = $SPLUNK_DB/netops/thaweddb
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 512000

[netproxy]
homePath = $SPLUNK_DB/netproxy/db
coldPath = $SPLUNK_DB/netproxy/colddb
thawedPath = $SPLUNK_DB/netproxy/thaweddb
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 512000

[netipam]
homePath = $SPLUNK_DB/netipam/db
coldPath = $SPLUNK_DB/netipam/colddb
thawedPath = $SPLUNK_DB/netipam/thaweddb
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 512000

[oswinsec]
homePath = $SPLUNK_DB/oswinsec/db
coldPath = $SPLUNK_DB/oswinsec/colddb
thawedPath = $SPLUNK_DB/oswinsec/thaweddb
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 512000

[osnix]
homePath = $SPLUNK_DB/osnix/db
coldPath = $SPLUNK_DB/osnix/colddb
thawedPath = $SPLUNK_DB/osnix/thaweddb
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 512000

[em_metrics]
homePath = $SPLUNK_DB/em_metrics/db
coldPath = $SPLUNK_DB/em_metrics/colddb
thawedPath = $SPLUNK_DB/em_metrics/thaweddb
datatype = metric
frozenTimePeriodInSecs = 2419200
repFactor = auto

 

Step 2 Configure HEC

Create a HEC app and deploy it onto the AIO (all-in-one) instance or your indexer endpoint. The inputs.conf below enables HEC and defines a token; change the token or use the one below (it's only for dev purposes).

#This is to enable HEC
[http]
disabled = 0
port = 8088

 

#These are the default source settings
[http://syslog]
disabled = 0
index = syslog_test
token = df800b50-6ab6-4830-a080-efc3f0e7b2f3
sourcetype = syslog:unassigned
indexes = email,main,netfw,netids,netipam,netops,netproxy,osnix,oswinsec,syslog_test,em_metrics

Step 3 Ensure the indexes and HEC endpoints are available in Splunk
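One quick way to confirm the indexes exist is a REST search from the search bar; this is a hedged sketch (the /services/data/indexes endpoint is standard, but adjust the table fields to taste):

| rest /services/data/indexes splunk_server=local | table title, frozenTimePeriodInSecs, maxTotalDataSizeMB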

Some of the indexes (example)

sc4s1

HEC Endpoint

sc4s2

 

Step 4 Remove Rsyslog

rsyslog comes with most Linux platforms and is usually already running. If it is not, move on to the next step; otherwise stop and remove it, or you will get conflicts on port 514.

sudo systemctl stop rsyslog.service
sudo systemctl disable rsyslog.service

(You should see output like: Removed symlink /etc/systemd/system/multi-user.target.wants/rsyslog.service)

sudo yum remove rsyslog

 

Step 5 Install Podman

sudo yum install git
sudo yum -y install podman

Check podman install

sudo rpm -qi podman

sudo podman info

sc4s3

Step 6 Configure the Podman Service

cd /lib/systemd/system

sudo vim ./sc4s.service

Add the below

[Unit]
Description=SC4S Container
Wants=NetworkManager.service network-online.target
After=NetworkManager.service network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Environment="SC4S_IMAGE=splunk/scs:latest"

# Required mount point for syslog-ng persist data (including disk buffer)
Environment="SC4S_PERSIST_VOLUME=-v splunk-sc4s-var:/opt/syslog-ng/var"

# Optional mount point for local overrides and configurations; see notes in docs
Environment="SC4S_LOCAL_CONFIG_MOUNT=-v /opt/sc4s/local:/opt/syslog-ng/etc/conf.d/local:z"

# Optional mount point for local disk archive (EWMM output) files
# Environment="SC4S_LOCAL_ARCHIVE_MOUNT=-v /opt/sc4s/archive:/opt/syslog-ng/var/archive:z"

# Uncomment the following line if custom TLS certs are provided
# Environment="SC4S_TLS_DIR=-v /opt/sc4s/tls:/opt/syslog-ng/tls:z"

TimeoutStartSec=0
Restart=always

ExecStartPre=/usr/bin/podman pull $SC4S_IMAGE
ExecStartPre=/usr/bin/podman run \
--env-file=/opt/sc4s/env_file \
"$SC4S_LOCAL_CONFIG_MOUNT" \
--name SC4S_preflight \
--rm $SC4S_IMAGE -s
ExecStart=/usr/bin/podman run -p 514:514 -p 514:514/udp -p 6514:6514 \
--env-file=/opt/sc4s/env_file \
"$SC4S_PERSIST_VOLUME" \
"$SC4S_LOCAL_CONFIG_MOUNT" \
"$SC4S_LOCAL_ARCHIVE_MOUNT" \
"$SC4S_TLS_DIR" \
--name SC4S \
--rm $SC4S_IMAGE

Step 7 Create Folders

Run

sudo podman volume create splunk-sc4s-var

(This creates a folder under /var/lib/containers/storage/volumes/)

Run

sudo mkdir /opt/syslog-ng
sudo mkdir /opt/syslog-ng/var
sudo mkdir /opt/sc4s
sudo mkdir /opt/sc4s/local
sudo mkdir /opt/sc4s/archive
sudo mkdir /opt/sc4s/tls

Step 8 Create environment file and add config

sudo vim /opt/sc4s/env_file

Add the below (change your host name and token if need be; leave TLS for now, you can do that later if you want):

SPLUNK_HEC_URL=https://<your-splunk-server>:8088
SPLUNK_HEC_TOKEN=df800b50-6ab6-4830-a080-efc3f0e7b2f3
SC4S_DEST_SPLUNK_HEC_WORKERS=6
#Set the following to no if you are using untrusted (e.g. self-signed) SSL certificates on HEC
SC4S_DEST_SPLUNK_HEC_TLS_VERIFY=no
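Before starting the service, you can sanity-check that the HEC URL and token in the env_file actually accept events. This is a hedged example using the standard HEC collector endpoint (swap in your own host name; -k skips certificate verification, dev only):

curl -k https://<your-splunk-server>:8088/services/collector/event -H "Authorization: Splunk df800b50-6ab6-4830-a080-efc3f0e7b2f3" -d '{"event": "hec smoke test", "sourcetype": "syslog:unassigned", "index": "syslog_test"}'

A healthy endpoint returns {"text":"Success","code":0}.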

Step 9 Start SC4S

sudo systemctl daemon-reload
sudo systemctl enable sc4s
sudo systemctl start sc4s
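If the service does not come up cleanly, the container output is visible through the systemd journal (assuming journald, which is standard on systemd distributions):

sudo journalctl -u sc4s -f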

Step 10 Check podman status

sudo systemctl status sc4s

sc4s4

sudo podman ps -a

sc4s5
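If you don't yet have a syslog device to hand, you can push a quick test message from the SC4S host itself. This is a hedged example that assumes the util-linux logger command and SC4S listening on TCP 514 locally:

logger -n 127.0.0.1 -P 514 -T "sc4s connectivity test"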

 

Step 11 Login to Splunk and check service

The below should show some data coming from the connector; it's normally in the main index.

sc4s6

 

Due to my lab limitations I don't have syslog devices, but the above should get you to a good point in a dev environment. Now focus on the common syslog devices SC4S supports and get some data in; see the SOURCES section in the link below!

For further SC4S documentation, click on this link

https://splunk-connect-for-syslog.readthedocs.io/en/master/#welcome-to-splunk-connect-for-syslog

I will look at running this service as non-root, enabling TLS, and configuring extra storage another time; all of that is covered in the link above.

Security Monitoring Success


Security monitoring is a vast topic and it can be daunting to begin with; you may well say to yourself, "where do I start?"

Over the last two years as a Splunk consultant, I have gained a lot of insight into the world of security monitoring, and this post gives some structure on what you can do to get started. You don't have to run through all the steps, as some small and medium organisations cannot afford to do them all, so pick and choose the ones you think are feasible and run through the process.

A security project can take anything from one week to years, depending on the size and requirements, but this approach should give you some guidance on the steps you need to take, and over time you will have a mature and successful solution. Good luck!

siempath

Splunk Quick Health Check


During various Splunk projects, I found some customers were not using the DMC, as it was not part of the deployment architecture, so I put together a number of health-check searches, which then led to this simple app.

It is therefore useful in such situations; it is not intended to take the place of the DMC, which is the preferred health checker.

You can deploy the DMC on the master node in clustered deployment scenarios.

For non-clustered scenarios, you can deploy the DMC on a search head, licence master, or a deployment server managing up to 50 nodes.

My app is normally deployed onto a search head.

sqh

Here is the link:

https://github.com/iopsmon/splunk_quick_health_check
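As a flavour of the kind of check the app runs, here is a simple hedged example (not taken from the app itself) that surfaces recent splunkd errors by host and component:

index=_internal source=*splunkd.log* log_level=ERROR earliest=-60m | stats count by host, component | sort -count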

Done!

Watch List In Splunk


This is a simple way to put someone on a watchlist. It is useful not only for external threats but for internal ones as well, for example to see whether anyone is exfiltrating sensitive files or trying to connect to unauthorised hosts.

Try looking through the logs without Splunk and it will take you ages; life will have evolved by then.

In one click you get a digital footprint of the person on the watchlist: a quick check of login successes and failures, a list of the commands they have been running, and the hosts they have been trying to log in to.

Create a lookup, authorised_users.csv, add it to the lookups folder, and then create a simple dashboard that uses a dropdown input driven by the CSV file, and create the widgets as required.

authorised_users.csv

user,is_approved
bsimpson,yes
hford,yes
splunk,yes

Create a token based on the user, and run search examples such as the one below.

| from datamodel:"Authentication"."Failed_Authentication" | search user=$user_token$ | stats count
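Similar panels can break the activity down by outcome and destination host; here is a hedged sketch using the CIM Authentication data model (assuming it is populated for your data sources):

| from datamodel:"Authentication"."Authentication" | search user=$user_token$ | stats count by action, dest | sort -count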

watchlist

Done

Splunk & CIS Top 20 Security Monitoring

The purpose of this blog is to provide a quick overview of how one can exploit Splunk and the Center for Internet Security Critical Security Controls for Effective Cyber Defence best practices, otherwise known as the CIS Top 20. The best practices consist of 20 key security controls (CSC) that an organisation can use to block or mitigate cyber-attacks and improve its overall security posture. The CIS CSC are ranked in order of overall importance and application to a corporate security strategy. For example, the first two controls, surrounding known inventory, are at the top of the list and are foundational in nature, ranking "very high" for attack mitigation.

The published top 20 controls are as follows:
CSC 1: Inventory of Authorized and Unauthorized Devices
CSC 2: Inventory of Authorized and Unauthorized Software
CSC 3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
CSC 4: Continuous Vulnerability Assessment and Remediation
CSC 5: Controlled Use of Administrative Privileges
CSC 6: Maintenance, Monitoring, and Analysis of Audit Logs
CSC 7: Email and Web Browser Protections
CSC 8: Malware Defences
CSC 9: Limitation and Control of Network Ports, Protocols, and Services
CSC 10: Data Recovery Capability
CSC 11: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
CSC 12: Boundary Defence
CSC 13: Data Protection
CSC 14: Controlled Access Based on the Need to Know
CSC 15: Wireless Access Control
CSC 16: Account Monitoring and Control
CSC 17: Security Skills Assessment and Appropriate Training to Fill Gaps
CSC 18: Application Software Security
CSC 19: Incident Response and Management
CSC 20: Penetration Tests and Red Team Exercises

  • Splunk software can be used to build and operate security operations centres of any size
  • Support the full range of information security operations, including posture assessment, monitoring, alert and incident handling, CSIRT, breach analysis and response, and event correlation
  • Out-of-the-box support for SIEM and security use cases
  • Detect known and unknown threats, investigate threats, determine compliance and use advanced security analytics for detailed insight
  • Proven integrated, big data-based security intelligence platform
  • Use ad hoc searches for advanced breach analysis
  • On-premises, cloud, and hybrid on-premises and cloud deployment options
  • Improve operational efficiency with automated and human-assisted decisions by using Splunk as a security nerve centre
  • Indexes data from any machine data source
  • Searches through machine data from a centralized console
  • Allows the security professional to add tags, create event types and correlate the incoming data with business context
  • Proactively monitors and alerts on security incidents, with automatic remediation of security issues – for example, changing a firewall rule in response to Splunk search results
  • Allows for the creation of reports, dashboards and other forms of analytics to communicate security information throughout the organization

So How Can Splunk Help?
Splunk has a free app that one can use to help provide security compliance; it is based on the CIS Top 20 security controls as published by the Center for Internet Security (CIS).

https://splunkbase.splunk.com/app/3064/

cis1

The app provides a data-agnostic framework which exploits Splunk's data analytics features and functions such as data models, including the Common Information Model (CIM). This allows Splunk to be data-agnostic, as it normalises data into common fields such as source IP, user, and so on.

The app has been developed for anyone in the IT security segment, as well as Splunk administrators. It relies on data being ingested into Splunk; identifying which sources of data should be ingested, such as Windows Active Directory logs, firewall logs and so forth, is part of the Splunk design and assessment process.
Once this data has been ingested and is CIM compliant, the app runs searches and populates the dashboards for CIS compliance. This provides security teams with insight into their compliance status and helps improve their overall security posture.
There are hundreds of free Splunk TAs (technical add-ons); these provide field extractions, lookups, tags, and event types, all of which help the Splunk CIS app present the data.

If any customisation is required, lookup files and TAs can be developed for any application that is required for CIS compliance.

Splunk supports the controls in four ways:

Verification:
As Splunk software ingests data, it can generate reports and dashboards that show compliance or non-compliance with controls. Incidents of non-compliance can generate alerts to SOC personnel.

Execution:
In the case of an attack or non-compliance, Splunk software can carry out recommended actions to meet controls. With version 6.0 of the CIS CSC, Splunk software becomes even more critical, since control 14, surrounding audit logs, has been promoted to position 6.

Verification & Execution:
Data from third-party sources can be correlated with data ingested in Splunk software to meet the control.

Support:
The Splunk platform provides flexible features that help security professionals with controls that are largely policy and process based.

Mapping Example – CSC 5

Controlled use of admin privileges can be accomplished with a number of toolsets that restrict the use of administrative accounts. The simplest methods are OS-level tools, like sudo, and controls that can be put in place with vendor-supplied tools like Active Directory. With this in mind, you want to apply CSC 5: Controlled Use of Administrative Privileges to your IT environment.
Splunk can help by consuming authentication logs from across the technology environment that detail account activity, including how accounts are being accessed and from where. Authentication logs come from, but are not limited to: host devices, domain controllers, directory servers, network devices, application logs and many others. All of this data will be ingested into Splunk software for searching and correlation.
Any use of known administrative accounts like “Administrator” and “root” and “sa” can easily be searched across the entire environment and reported or alerted upon.
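A hedged SPL sketch of that kind of check, using the CIM Authentication data model (adjust the account names and add your own privileged accounts as needed):

| from datamodel:"Authentication"."Authentication" | search user IN ("Administrator", "root", "sa") | stats count by user, src, dest, action | sort -count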
The below is an example of a dashboard showing Successful Logins from 10 Most Rare Users – Privileged Accounts

cis2

So get Splunking, and get the CIS app if you need to implement the CIS CSC. Splunk makes all data in your organization security relevant: as data is indexed by Splunk Enterprise it becomes instantly searchable, and security professionals can easily correlate all of these seemingly disparate data sources. Furthermore, the different data types can be seen in the context of data locked in business systems, which is often the key factor in determining correct root causes. Security professionals can then build dashboards and reports on top of the data, and set up actions and alerts to be executed on specific thresholds. In addition, any analysis can be operationalized to proactively protect your organization from an emerging threat.

Check to see who’s copying DATA from your restricted Linux servers!


There are a lot of insiders who do a lot of copying, and this is one method that could help you see who's copying the data when they shouldn't be!

Install https://splunkbase.splunk.com/app/833/, the TA for collecting Linux OS data, onto your restricted Linux servers. Once the data has been ingested into Splunk, check the sourcetype and ensure the data is correct and the parsing is good.

The TA version used here was 5.2.4.

SPL = index=linux sourcetype=bash_history

cp1

If the data looks good, run a simple SPL search to check for any copy or sudo commands for this sourcetype and present the results in a table.

SPL = index=linux sourcetype=bash_history (sudo OR cp) | table _time, user_name, host, bash_command

cp2

From this you could enhance the table with some colours and see which user has been a very naughty boy OR girl!!!!

cp3

You could do other tables or charts which show data being deleted, which could be a disgruntled employee wanting to do some damage before they leave.
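For example, a hedged variant of the earlier search that looks for delete-style commands, using the same fields from the bash_history sourcetype:

SPL = index=linux sourcetype=bash_history (rm OR shred OR unlink) | table _time, user_name, host, bash_command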

Done!


Splunk Stream Is Cool


Splunk Stream, once configured, can monitor many protocols over the wire, so I wanted to see what I could get into Splunk.

I configured the Stream app (https://splunkbase.splunk.com/app/1809/), which includes a binary that captures the packets, on a number of test servers running the universal forwarder. In the real world you may use a tap port, or the independent Stream forwarder, which uses HEC so you can ingest network data straight into Splunk.

My config was on some test servers to capture the packets via the streamfwd binary.

Follow the Stream documentation for the config: https://docs.splunk.com/Documentation/StreamApp/7.1.2/DeployStreamApp/InstallSplunkAppforStream

Also deploy the Stream app onto the search head; it provides the dashboards, props, transforms, and the configuration pages for the Stream app.

I wanted a simple check on ICMP traffic, so I enabled the ICMP protocol in the Stream app under Configuration > Configure Streams.

stream2

I ran some ping checks and could see the data via a basic SPL:

index=dc_stream sourcetype="stream:icmp" | table src_ip, dest_ip, protocol_stack, bytes, bytes_in, bytes_out

stream3

 

I created a simple chart to see the data and which destination had received the most ICMP packets.

SPL: index=dc_stream sourcetype="stream:icmp" | timechart sum(bytes) as total_bytes by dest_ip

stream4

So this demonstrates how one can capture wire data and then run some SPL to get stats on the network traffic you're interested in.

Here are some other Stream data dashboard examples that you get.

DNS is a good one; you can see how active the DNS server is.

stream5
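For instance, a hedged SPL sketch against the same index to see which clients are generating the most DNS traffic (src_ip is a standard Stream field; adjust the index to yours):

index=dc_stream sourcetype="stream:dns" | timechart count by src_ip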

 

Done.

This app is helpful in getting wire data into Splunk; go check it out:

https://splunkbase.splunk.com/app/4372/

 

Windows Services Monitor In Splunk


This is a quick way of monitoring your Windows Services

So, after ingesting Windows data via https://splunkbase.splunk.com/app/742/ (version 4.8.4), I wanted to see which services were set to Auto but not running.

I wanted to ensure any important services were up and actually running, so by running the below search I could capture these services from some test hosts.

SPL: index=windows sourcetype=WinHostMon  Name=* StartMode=Auto State=Stopped | stats values(DisplayName) by host

From the search I could see the SNMP and Firewall services were stopped but should be running.
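As a slight extension, here is a hedged sketch using the same fields that also shows when each stopped service was last reported:

SPL: index=windows sourcetype=WinHostMon Name=* StartMode=Auto State=Stopped | stats latest(_time) as last_seen by host, DisplayName | convert ctime(last_seen)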

The below is part of the config from the Windows add-on inputs.conf which collects the data. These inputs are set to run every 300 seconds (5 minutes). Once configured, deploy it to some Windows test nodes running the universal forwarder and do some search tests.

win_services_stopped

Ensure you have deployed the add-on props and transforms to the search heads / indexers for the parsing, otherwise you won’t see the field names.

###### Host monitoring ######
[WinHostMon://Process]
interval = 300
disabled = 0
type = Process
index = windows

[WinHostMon://Service]
interval = 300
disabled = 0
type = Service
index = windows

Done.

 

Monitor Linux RPM Installs With Splunk


This Splunk config will help you monitor which software packages are being installed on your critical Linux servers.

Watching for RPM packages being installed on critical Linux CentOS/RHEL servers could indicate someone not following change control, or you could use it to monitor change control, among many other use cases for monitoring such an event.

Before configuring the below, ensure you have set up Splunk, indexes, and universal forwarders, and have some test Linux servers. You also need Splunk admin-level skills, or to be an experienced Splunker.

This config was performed on CentOS 7.x servers and Splunk 7.x.

Config:

  • A few fields were created using regex, this was done after analysing the logs
  • New tags and event types are created for this config.

Fields: action / software

Eventtypes: yum_packages
Tags: Installed / installed

Configure inputs, eventtypes, and tags to monitor the yum log file:

inputs.conf (Deploy to the UF/Linux Server)

[monitor:///var/log/yum.log]
whitelist = (yum.log)
sourcetype = linux_yum
index = syslog
disabled = 0

props.conf (Deploy to the Search Head / Indexers)

[linux_yum]
CHARSET=UTF-8
TIME_PREFIX=^
MAX_TIMESTAMP_LOOKAHEAD=17
TIME_FORMAT=%b %d %H:%M:%S
REPORT-syslog=syslog-extractions
SHOULD_LINEMERGE=false
LINE_BREAKER =([\r\n]+)
KV_MODE = auto
NO_BINARY_CHECK = true
TRUNCATE = 9999
TRANSFORMS=syslog-host
disabled=false

#Extract and action field which is Installed and the software field which is the RPM package installed.
EXTRACT-action = ^(?:[^ \n]* )\d+\s\d+:\d+:\d+\s(?P<action>\w+)
EXTRACT-software = ^(?:[^ \n]* )\d+\s\d+:\d+:\d+\sInstalled:\s\d+:(?P<software>.*)

#normalise the action field as status
FIELDALIAS-action = action as status

Add the below event types and tags for the linux_yum sourcetype; this will help with CIM compliance.

eventtypes.conf (Deploy to the Search Head / Indexers)

[yum_packages]
search = sourcetype=linux_yum
#tags = installed Installed

tags.conf (Deploy to the Search Head / Indexers)

[eventtype=yum_packages]
Installed = enabled
installed = enabled

After the data has been ingested, install some test RPM packages and run the search below; you should get output similar to the screenshot.

index=syslog sourcetype=linux_yum action=”Installed”
| rename software as installed_software_rpm
| fields _time, host, action, installed_software_rpm
| eval date=strftime(_time, "%d/%m/%Y %H:%M:%S")
| stats count by date, action, host, installed_software_rpm

software
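The same action extraction also picks up package removals from yum.log (the Erased lines). The software extraction above only covers Installed lines, so a hedged removal check would fall back to the raw event:

index=syslog sourcetype=linux_yum action="Erased" | table _time, host, action, _raw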

Done

Splunk Ports Check Scanner


Here's a simple Splunk port-scanning script I put together. It has helped me when the required ports had not been opened on cluster members (indexers/search heads) and I was getting connection-failed errors, so I thought I'd share it for anyone who needs to quickly check Splunk port status in a multi-server environment. You can change the ports in the script should they differ from the defaults in your environment.

https://github.com/iopsmon/splunk_port_scanner
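If you just need a one-off check of a single port rather than the full script, something like this works too (a hedged example; the host names are placeholders and nc is assumed to be installed; 8089 and 8088 are the default management and HEC ports, and 9997 is the commonly used forwarding port):

nc -zv splunk-idx01.example.com 9997
nc -zv splunk-idx01.example.com 8089
nc -zv splunk-sh01.example.com 8088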