Monitor Linux RPM Installs With Splunk


This Splunk config will help you monitor which software packages are being installed on your critical Linux servers.

Watching for RPM packages being installed on critical CentOS/RHEL servers can indicate someone not following change control, or it can be used to verify change control compliance, among many other use cases for monitoring such an event.

Before configuring the below, ensure you have set up Splunk, indexes, universal forwarders (UFs), and some test Linux servers. You will also need Splunk admin-level skills, or be an experienced Splunker.

This config was tested on CentOS 7.x servers and Splunk 7.x.


  • A few fields were created using regex; this was done after analysing the logs.
  • New event types and tags are created for this config.

Fields: action / software

Eventtypes: yum_packages
Tags: Installed / installed

Configure the inputs, event types, and tags to monitor the yum log file:

inputs.conf (Deploy to the UF/Linux Server)

# Monitor stanza assumed; yum.log lives in /var/log by default
[monitor:///var/log]
whitelist = (yum\.log)
sourcetype = linux_yum
index = syslog
disabled = 0

props.conf (Deploy to the Search Head / Indexers)

# Stanza name matches the sourcetype set in inputs.conf
[linux_yum]
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
KV_MODE = auto

# Extract an action field (e.g. Installed) and a software field (the RPM package installed)
EXTRACT-action = ^(?:[^ \n]* )\d+\s\d+:\d+:\d+\s(?P<action>\w+)
EXTRACT-software = ^(?:[^ \n]* )\d+\s\d+:\d+:\d+\sInstalled:\s\d+:(?P<software>.*)

# Normalise the action field as status
FIELDALIAS-action = action as status
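For reference, the extractions above are written against yum.log lines of this shape (a made-up sample entry, not taken from a real server; the digit before the first colon in the package name is the RPM epoch the software regex expects):

```
Jun 24 11:08:41 Installed: 1:telnet-0.17-64.el7.x86_64
```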

Add the below event types and tags for the linux_yum sourcetype; this will help with CIM data model compliance.

eventtypes.conf (Deploy to the Search Head / Indexers)

[yum_packages]
search = sourcetype=linux_yum
#tags = installed Installed

tags.conf (Deploy to the Search Head / Indexers)

[eventtype=yum_packages]
Installed = enabled
installed = enabled

After the data has been ingested, install some test RPM packages and run the search below; you should get output similar to the screenshot.

index=syslog sourcetype=linux_yum action="Installed"
| rename software as installed_software_rpm
| fields _time, host, action, installed_software_rpm
| eval date=strftime(_time, "%d/%m/%Y %H:%M:%S")
| stats count by date, action, host, installed_software_rpm



Splunk Ports Check Scanner


Here's a simple Splunk port-scanning script I put together. It has helped me when the required ports had not been opened on cluster members (indexers/search heads) and I was getting connection-failed errors, so I thought I'd share it for those who need to quickly check Splunk port status in a multi-server Splunk environment. You can change the ports for your environment, should they have been changed from the defaults.
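A minimal sketch of such a port checker in Python is below. The hostnames are placeholders, and the port list assumes the Splunk defaults (web UI, management, forwarding, KV store); swap in your own values if your deployment differs.

```python
import socket

# Default Splunk ports to check; adjust for your environment if they
# have been changed from the defaults.
SPLUNK_PORTS = {
    8000: "Web UI",
    8089: "Management (splunkd)",
    9997: "Forwarding / receiving",
    8191: "KV store",
}

def check_port(host, port, timeout=2):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(hosts):
    """Check every Splunk port on every host and print/return the results."""
    results = {}
    for host in hosts:
        for port, role in SPLUNK_PORTS.items():
            status = "open" if check_port(host, port) else "closed/filtered"
            results[(host, port)] = status
            print(f"{host}:{port} ({role}) -> {status}")
    return results

# Example usage (hypothetical hostnames):
#   scan(["idx01.example.com", "sh01.example.com"])
```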


Splunk And Windows Security Logs


There's a plethora of Windows Security events you can monitor, so the task can become overwhelming: which ones do you choose? These are just a few of the key ones to start with; you can expand on them as you go. You can, of course, enable and ingest all of them, but you'll need to keep an eye on the volume and daily quota limit, as Windows logs can be very noisy.

So look at the ones below and monitor these as a quick-start set.


Good Microsoft Article On Which Events:

4719 System audit policy was changed.
4964 Special groups have been assigned to a new logon.
1102 The audit log was cleared.
4706 A new trust was created to a domain.
4724 An attempt was made to reset an account's password.
4739 Domain Policy was changed.
4608 Windows is starting up.
4609 Windows is shutting down.
4625 An account failed to log on.
4648 A logon was attempted using explicit credentials (possible privilege escalation).
4697 A service was installed in the system.
4700 A scheduled task was enabled.
4720 A user account was created.
4722 A user account was enabled.
4723 An attempt was made to change an account's password.
4725 A user account was disabled.
4726 A user account was deleted.
4728 A member was added to a security-enabled global group.
4731 A security-enabled local group was created.
4732 A member was added to a security-enabled local group.
4740 A user account was locked out.
4743 A computer account was deleted.
4767 A user account was unlocked.

Here is a simple Windows dashboard example you can produce from these security events.
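As a starting point, a search like the one below could drive a dashboard panel charting a handful of the events from the list over time. This is a sketch: the index, sourcetype, and EventCode selection are assumptions you should adjust for your environment.

```
index=wineventlog sourcetype=WinEventLog:Security EventCode IN (1102, 4625, 4720, 4725, 4740)
| timechart span=1h count by EventCode
```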



Good Splunk Links


Here are some useful Splunk links

[Splunk Apps and TA’s]

[Splunk Disk Capacity Sizing]

[Splunk Forum (Great place for questions and answers)]

[Splunk data on boarding guide – good for understanding about data]

[Splunk wiki]

[Splunk Reference]

[Splunk Quick Command Reference]

[Splunk hot/cold/warm data process]

[Data Onboarding Cheat Sheet]

“Lookups NOT Hookups”


I recently had to do a 5-minute presentation on my Splunk ES course. I chose lookups as the subject, so I thought I'd add this to my blog.

“lookups NOT hookups” (was my presentation title)

Anyway, this simple article will show you how to set up a simple lookup within Splunk.

The search is for a Windows event ID and runs a lookup on the field name "Account_Name". If a match is found in the lookup, the fields specified in the OUTPUT clause will be displayed in Splunk's field list, and from this you can create dashboards, tables, etc.

Time to get some data in!

Configure some UFs on Windows servers and ingest security logs; see the Security Essentials app, which will help you get the data in.

Tasks: (After Ingesting the Windows logs)

a. Run a search and check the fields – note the fields of interest

index=wineventlog (type your index details)

Look for the Account_Name field; it contains the Windows domain account name and is the one of interest.

b. Create a lookup CSV file (notice the Account_Name field); the field name must match the CSV file headers.
Lookup example CSV file:
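For illustration, a minimal lookup CSV might look like the below. All names and values here are hypothetical; the only requirement is that the Account_Name header matches the field name as it appears in Splunk.

```csv
Account_Name,first_name,last_name,account_status
jsmith,John,Smith,disabled
ajones,Alice,Jones,active
```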


Add the CSV file to Splunk (Settings / Lookups) and define the lookup name, then run the command below to see the data from the CSV file:

| inputlookup <csv filename>   (If you can see the data, move on to the next step)

c. Run the below search, which will add the names once an Account_Name is matched and the account is disabled.

Before you run the search, disable one of the accounts in the CSV file; event 4738 should then be generated in Windows (I'm using this as an example).

index=wineventlog sourcetype=WinEventLog:Security EventCode=4738 "Account Disabled"
| lookup dc_ad_accounts Account_Name OUTPUT first_name, last_name, account_status

You should see the new fields first_name and last_name, and they should contain the values from the CSV file.

Now add a table / dashboard:

index=wineventlog sourcetype=WinEventLog:Security EventCode=4738 "Account Disabled"
| lookup dc_ad_accounts Account_Name OUTPUT first_name, last_name, account_status
| table _time, first_name, last_name, account_status




As I tinkered with Splunk and ElasticSearch, I thought I'd have a play with ArcSight Logger as well; I didn't want to leave the third amigo of my SIEM blog out.

So, another good logging tool. The install was based on Logger 6.4 and RHEL 7.3 and was easy enough; you have to make some OS config changes and set up firewall rules (or disable the firewall for lab work). The install took around 10 minutes, but again, this is just a lab server; for the enterprise you would have to run through a performance, sizing, and storage exercise. You would also need to work out the average EPS (events per second, based on your devices).

Logger comes with over 300 connectors, the components which collect data from many different devices and send it to the Logger via CEF (Common Event Format, an industry standard) or raw. Most of the connectors are security based, but there are some for Operations use cases. Collection of custom data can also be achieved using FlexConnectors. The data is normalised, which makes it easier to analyse, but raw data can also be presented for compliance should it need to be. All the data is indexed and compressed.

Logger comes as an appliance or software (appliances might be better for smaller companies)

So I collected a few logs locally; using the GUI I was able to add the sources.




Once you save and enable the receiver, you should see the name on the main page; if you click on the name, you will see the raw data from the /var/log/secure log file.

If you then search for "Failed password", you should get a chart similar to the one below.



Another simple example of searching the data is the search command below, which shows the total event count for each hour.

deviceVendor != "Test" | chart sum(baseEventCount) as Event_Count span=1h


I like Logger; it's a solid tool. The GUI is great and easy to move around, there are many connectors to choose from, and once ingested the data can be forwarded to Operations Bridge Analytics (OBA), a cool analytics engine, and to other ArcSight managers such as ESM.

In conclusion, have a play with each logging tool in a POC and see what works for you. Put your admin hat on and think about DevOps, security, and ease of use and configuration. Don't forget to look at the licence costs: they all have different models, so you will need to run through a costing exercise once you know what you're going to monitor and what the architecture is going to be; a distributed architecture for the enterprise will add to the cost in terms of setup, licensing, and maintenance. Define your use cases and map them to the technology, and don't collect every log. Ensure the logs are good quality (ISO 8601 timestamps) and have a solid infrastructure in place, as this data will soon grow very large.