Splunk / Python Script / Syslog Demo Data


Officially now a Splunk Certified User (a Splunker!)

[Image: Splunk certification badge]

With that in mind, I thought I’d create a demo script to load some log data into Splunk and show the data along with some charts.

As the data is time series (time-stamped), you’ll be able to see information from the log entries and build a few charts.

A use case could be to show all the syslog data from pre-defined critical servers, then run searches against it for particular log events or messages.

A typical search could be “Failed from ip_address”

I created a Python script that generates a log file called dc_security.log. The script has two options: one generates the data quickly, the other adds delays so you can leave it running for a while to build up the data and show it over time in Splunk. The log data is based on the syslog format, with various messages in the log events.

Example Data:

Dec 2 02:51:31 LINUX_SRV3 user joker has tried to login to this server and failed from ip_address 70.70.70.21
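
If you just want the gist of it, here’s a minimal sketch of a generator that writes lines in that format. This is an illustration only – not the dc_security_v1.0.py script from the repo below – and the host names, user names and IP range are made up:

# Minimal sketch of a syslog-style log generator (illustration only)
import random
import time
from datetime import datetime

hosts = ["LINUX_SRV1", "LINUX_SRV2", "LINUX_SRV3"]   # hypothetical server names
users = ["joker", "batman", "riddler"]               # hypothetical user names

with open("dc_security.log", "a") as log:
    for _ in range(100):
        ip = "70.70.70.{0}".format(random.randint(1, 254))        # hypothetical source subnet
        timestamp = datetime.now().strftime("%b %d %H:%M:%S")     # syslog-style timestamp
        line = "{0} {1} user {2} has tried to login to this server and failed from ip_address {3}\n".format(
            timestamp, random.choice(hosts), random.choice(users), ip)
        log.write(line)
        time.sleep(0.1)   # small delay between events; drop this for the "quick" option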

After you download the script, run it – sudo python ./dc_security_v1.0.py – and select either option.

Download the script from Github

https://github.com/iopsmon/log_data_generator

[Screenshot: running the script]

After the file has been created, copy dc_security.log to a machine from which you can access the Splunk web GUI and upload the data into an index called dcsecurity (or an index of your choice).

Splunk > Add Data > Upload Data > drag the file into the target box > next

[Screenshot: Splunk Add Data upload screen]

Set the source type – as the log file contains syslog-style entries, choose Operating Systems > linux_messages_syslog

Press Next > Review > Submit > Start Searching

From the search bar, type the query below and you should get a display of all the logs from that file:

sourcetype=linux_messages_syslog

[Screenshot: search results for sourcetype=linux_messages_syslog]

Here are a few charts based on the data.

Search 1: This searches for events containing failed AND from ip_address:

sourcetype=linux_messages_syslog failed AND from ip_address

[Chart: results of Search 1]

 

Search 2: This search query shows which server has had the most failed login attempts. You can see there are 3,500 events, which could be an IOC (Indicator of Compromise) such as a brute-force attack.

It also shows the source IP; based on this you would run your security process and take action.

sourcetype=linux_messages_syslog failed AND from ip_address | top host

[Chart: top hosts by failed login attempts]
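
If you also want to see how the failures trend over time, a variation of Search 2 using the timechart command should do it (same search terms, just a different reporting command):

sourcetype=linux_messages_syslog failed AND from ip_address | timechart count by host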

Hopefully this shows you how to get some log data into Splunk, then run some quick searches, and create some charts for insights.

In a production environment you would use forwarders to collect and forward the syslog data on an ongoing basis.
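
On a Splunk Universal Forwarder that typically means a monitor stanza in inputs.conf. A minimal sketch is below – the index and sourcetype match the demo above, so adjust the path and names to your environment:

[monitor:///var/log/dc_security.log]
index = dcsecurity
sourcetype = linux_messages_syslog
disabled = 0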

 

SIEM / ARCSIGHT LOGGER NOTES


Having tinkered with Splunk and Elasticsearch, I thought I’d have a play with ArcSight Logger as well – I didn’t want to leave the third amigo in my SIEM blog out.

So, another good logging tool. The install was based on Logger 6.4 and RHEL 7.3 and was easy enough; you have to make some OS config changes and set up firewall rules (or disable the firewall for lab work). The install took around 10 minutes, but again this is just a lab server – for the enterprise you would have to run through a performance, sizing and storage exercise. You would also need to work out the average EPS (events per second, which is based on your devices).

Logger comes with over 300 connectors; these are the components that collect data from many different devices and send it to Logger via CEF (Common Event Format, an industry standard) or as raw data. Most of the connectors are security based, but there are some for operations use cases, and the collection of custom data can be handled with FlexConnectors. The data is normalised, which makes it easier to analyse, but the raw data can also be presented for compliance should it need to be. All the data is indexed and compressed.
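
For reference, a CEF event is a single line: a pipe-delimited header (version, device vendor, product, version, event class ID, name, severity) followed by key=value extensions. The line below is a made-up example just to show the shape – the vendor, product and values are hypothetical:

CEF:0|SecVendor|DemoFirewall|1.0|100|Failed login|7|src=70.70.70.21 dst=10.0.0.5 suser=joker msg=failed login attempt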

Logger comes as an appliance or software (appliances might be better for smaller companies)

So I collected a few logs locally; using the GUI I was able to add the sources:

/var/log/secure

/var/log/messages

[Screenshot: adding the log file receivers in the Logger GUI]

Once you save and enable the receiver, you should see its name on the main page; if you click on the name you will see the raw data from the /var/log/secure log file.

If you then search for “Failed password” you should get a chart similar to the one below.

[Chart: “Failed password” search results]

 

Another simple example is the search command below, which shows the total event count for each hour.

deviceVendor != “Test” | chart sum(baseEventCount) as Event_Count span=1h

[Chart: total event count per hour]

I like Logger; it’s a solid tool, the GUI is great and easy to move around, there are many connectors to choose from, and once ingested the data can be forwarded to Operations Bridge Analytics (OBA), which is a cool analytics engine, and to other ArcSight managers such as ESM. In conclusion, have a play with each logging tool in a POC and see what works for you. Put your admin hat on and think DevOps, security, ease of use and configuration, and don’t forget to look at the license costs – they all have different models, so you will need to run through a costing exercise once you know what you’re going to monitor and what the architecture is going to be; a distributed architecture for the enterprise will add to the cost in terms of setup, licenses and maintenance. Define your use cases and map them to the technology, don’t collect every log, ensure the logs are good quality (consistent, ISO 8601-style timestamps help), and have a solid infrastructure in place, as this data will soon grow very large.

SIEM / SPLUNK / ELASTICSEARCH NOTES


SIEM (Security Information and Event Management) is another area of monitoring that has come into the foreground; with all the hacks these days, it should be a top priority for all organisations.

The aim here is to consolidate logs from many different sources in real time – typically structured and unstructured data from firewalls, SNMP, DNS, switches, routers, LDAP, IDS, Apache/IIS, databases and application logs. Once this data is received and ingested into the tool’s engine, it is normalised or indexed in such a way that you can run searches, create time series charts, correlate, retain the data for compliance and alert, all in real time. These are just some of the features, and the tools are really fast at them. The reason they’re so fast is that they don’t use traditional database schemas for accessing the data; the datasets tend to be files which are compressed and indexed, so the tools can search the data a bit like googling.

Data is typically collected via agents, which come in one form or another; regex can be used to parse the data, or the raw data can be ingested as-is. As the timestamp is typically recorded in the logs, it’s often used as the basis of the time series charts.
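
As a rough illustration of that parsing step, the sketch below (my own example, not taken from any particular tool) uses a regex to pull the timestamp, host and message out of a syslog-style line like the ones generated earlier:

# Hypothetical parser for syslog-style lines (illustration only)
import re

LINE_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2})\s+"
    r"(?P<host>\S+)\s+"
    r"(?P<message>.*)$"
)

def parse_line(line):
    match = LINE_RE.match(line)
    return match.groupdict() if match else None

sample = "Dec 2 02:51:31 LINUX_SRV3 user joker has tried to login to this server and failed from ip_address 70.70.70.21"
print(parse_line(sample))
# -> {'timestamp': 'Dec 2 02:51:31', 'host': 'LINUX_SRV3', 'message': 'user joker has tried to ...'}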

I started to have a look at two SIEM solutions, as they seem to be the ones many people talk about these days. There are many other logging solutions, and they are not just for SIEM – you can also extract business-related data from the logs, such as how many users bought a particular product.

I wanted to get a feel for them, and find out how easy or complex they were to use and present some data. This is not a full on review, but I always try to put on my admin hat from many years ago to see how it works under the hood and if I would take to it.

Basic Assessment Criteria:

  • Ease of install and config
  • Ease of getting data into the engine (Apache / security logs are what I looked into)
  • Ease of presenting the data
  • Documentation Quality

Both were installed on RHEL 6.5.

Splunk 7

Install and config on my RHEL 6.5 server was easy. I configured a forwarder to point at the logs I wanted and to send them to the Splunk server, which was doing all the collecting, indexing and parsing; Splunk has a feature that detects what type of data source you have from the logs. Once the data was in I could run my searches – the source type was linux_secure.

This simple query shows the data from the /var/log/messages log file

index=ops_security_idx* “Failed password”

[Screenshot: “Failed password” search results]

This simple query shows the data in a chart. (I know it’s only from one server, but if you had a thousand servers you could scan the data in a short time and find which ones are having the most login failures; this could show which servers someone is trying to get into. Simple but very effective.)

index=ops_security_idx* “Failed password” | top host   (this uses my index, matched by ops_security_idx*, searches for the string “Failed password”, and pipes the results to the top command on the host field)

[Chart: top hosts with failed login attempts]

Elasticsearch

Install and config on my RHEL 6.5 server was fairly easy. I had to download a number of RPM files (Elasticsearch / Logstash / Kibana / Filebeat) and set up a few dependencies. Once installed, I had to work through a number of configuration files to get the components working, including the Filebeat agent, to get some data into the Elasticsearch engine.
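
For reference, the Filebeat side of that configuration is only a few lines. The sketch below is illustrative rather than a drop-in config – the exact keys vary between Filebeat versions:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/secure
output.elasticsearch:
  hosts: ["localhost:9200"]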

Once the data was in I could run my searches; the data source was linux_secure.

[Screenshot: search results]

Both SIEM tools are good, but in today’s fast-paced world of monitoring, if you want something that fits into the DevOps mode of operation and delivers value quickly, Splunk would be my first choice; if you have time for setup and configuration then Elastic would be just fine. Both have good documentation and forum support, and both have free versions with limitations. I found Splunk easier to install and configure in my lab, but for the enterprise you would need to consider many other variables. There are many add-ons, apps and components to tailor the solution to your needs, and a lot of good knowledge in the community.

In the enterprise, both tools would need to scale for your use cases, so a good performance and sizing exercise would need to be carried out; the volume of data ingested will start to grow very quickly, and if the infrastructure is not fit for purpose it can lead to issues in the long run. The infrastructure that supports these tools needs to be future-proofed and easy to scale out should you need to.

Both have cloud (SaaS) based offerings that are easy to set up for a trial period.

After my tinkering, I realised that by applying the same project principles I used for systems and application monitoring projects, you can successfully implement SIEM-based tools by following a few key steps.

  1. Gather from all the organisation’s stakeholders what log data they would like to use for log analysis, and define the use cases.
  2. Create a charter capturing the business, technical, functional and non-functional requirements, and define the logging strategy.
  3. Select the top SIEM players on the market – Gartner has good articles on this – then run a POC.
  4. Based on the POC, create a design (HLD/LLD) and define the infrastructure and data sources (web, security logs etc.); don’t log everything, only the data you’re interested in.
  5. Train and develop skills for admins and users.
  6. Deploy as per the design in a phased manner. Phase 1 should not include everything – use only the critical features and functions required, then add more over time and grow the solution; this way it will mature.
  7. Govern and maintain the solution – speed is everything for a logging solution, so ensure the infrastructure is performing as it should, and retire old configurations and apps that are not in use. Don’t leave it to sort itself out: if you no longer monitor various components, ensure that data is no longer being collected, and keep everything up to date.

Monitoring Tools Changes


Over the last 15 years I have been tinkering with some excellent monitoring software tools; these, alongside other types of software, are today known as monolithic software. I recently heard this term while doing Docker training – how times have changed, and it made me feel somewhat outdated (which I am… not). So I started to research the next generation of tools and figure out why they are so hip and trendy. Here are a few that I have recently read about, some of which I have tinkered with:

  • AWS – Cloudwatch (cloud monitor)
  • Google StackDriver (cloud monitor)
  • Azure Monitor (cloud monitor)
  • Riverbed (apm)
  • App Dynamics (apm)
  • Splunk (log/siem)
  • Elastic (log/siem)
  • LogRhythm (log/siem)
  • New Relic (apm)
  • Solarwinds (network)

These tools are a mix of classic and open source, and can be deployed in the cloud and on premise. A key finding for me was that, alongside the classic way of deployment, they offer cloud-based monitoring, API services and deployment of the tool itself in the cloud, which makes them more agile than the monolithic tools; you don’t need a big army of people to deploy, configure and maintain them. That said, it does depend on the size of the organisation’s requirements, so you still have to perform an environment analysis before you choose your tool. This is sometimes a painful process, but if you do all the upfront analysis work and define a monitoring strategy, you will have a good outcome.

Each tool has its pros and cons. The main difference today is that the DevOps world, which is now standard practice in IT departments, needs agile ways of monitoring because of the speed of everything and continuous integration, so time to deploy, configure and use is a key requirement. What used to take many months – provisioning servers, installing software and then delivering – can now be done within hours for some use cases in the cloud.

The next generation of monitoring tools in the limelight offer dashboards, reporting, time series charts, anomaly detection (machine learning), alerts, application diagnostics, network sniffing, metric correlation and ingestion of raw logs (unstructured and structured data) – and they do it all faster and make use of APIs. Look out for these features and map them to your use cases to ensure they meet the requirements.

The one standout new feature in some of these tools is machine learning. It’s a new way of monitoring: trying to find the needle in the haystack in volumes of data that are impossible for humans to trawl through. For log analysis, anomaly detection is used, and it’s a great feature. Another way of getting to the data you want is to run search queries against the logs for useful insights, and some of these tools provide a great way to get the data ingested.

Monitoring start-ups are using cloud (AWS), containers and open source components to build some of these new monitoring tools, and they provide good, fast monitoring services. It’s worth seeing if these meet your requirements too, but keep security in mind, as sometimes on-premise tools might be better suited given the risk of your infrastructure being compromised.

Monitoring has been, and always will be, a big challenge; trying to monitor the full IT stack is a very difficult task. In the past some software projects would fail to even include monitoring; today it should be a standard process. I go by the principle, learned over the years, that it’s never going to be 100% perfect – if you get 80% of it right (the 80/20 rule) then you are winning.

A Typical IT Monitoring Stack

  • End User (synthetic monitoring, web page & protocol use and performance, reports)
  • Applications (metrics – performance / custom, SQL queries)
  • On-Premise – Infrastructure (performance and availability, metrics, logging, logs, security, hardware)
  • Cloud – (service, compute, database, capacity, scaling, API, reports)

With this stack in mind, consider: do you really need every metric and web application monitored? Do you really need an agent on every server? Do you really need every event or log (unless required for compliance)? These are a few things to think about; controlling the monitoring environment is key, or it will get out of hand very quickly.

Here are a few tips on getting the right monitoring outcome:

  • Work out your monitoring business and technical goals
  • Identify the functional and non-functional requirements (ensure the tool is future-proof and API-oriented, performs well, is easy to use, and scales for the enterprise)
  • Architect and design the solution (HLD/LLD)
  • Plan the build activities
  • Test the solution
  • Develop tools skills
  • Create standards
  • Deploy the solution
  • Grow the solution – don’t use all the features at once, but exploit them over time
  • Configure as much automation as possible
  • Maintain the solution

No one tool (or set of tools) is going to give you the magic solution you want. A mix of tools (best of breed) can become a nightmare to manage and integrate, though with REST API services it’s becoming easier; a one-stop-shop platform can lead to vendor lock-in and can sometimes be cumbersome to use, but on the plus side it gives you a complete framework that all the teams (network/apps/db/os/dev) can eventually learn and exploit to their advantage. Sometimes it’s better to use a complete framework and sometimes it’s better to use best of breed.

The one thing I would suggest is to look at how easy the tool is to use and maintain; if you’re fighting with the tool to get it to do what you want in a timely manner, it’s not going to be a happy relationship. Use the proof-of-concept stage to validate the solution – it may cost a little more upfront, but in the long run it can save you a lot in terms of costs and headaches.

Security Monitoring / Kali / Nmap / Hacking

One of my ISP email accounts got hacked not long ago; they managed to use the SMTP server to send spam emails, but I detected it in time and shut the account down, as it wasn’t a key account. The ISP said it was a brute-force attack.

I also had a telephone call from someone pretending to be from Microsoft. He said I had a virus and that I needed to install TeamViewer so he could check my PC. To cut a very long story short, I kept him on the phone for 20 minutes and pretended not to know how to use the PC properly; he was trying to get me to run eventvwr to show me the logs, but I kept making typing mistakes, which was making him so vexed that I had to hold myself back from laughing. In the end I taught him a lesson and he put the phone down. A few weeks after this incident I heard this news – http://www.bbc.co.uk/news/technology-40430048

With all the cyber-attacks, and the fact that if it’s got an OS on it, it can be hacked, it was time to check all the OS-based devices on my home network: routers, TVs, PCs, laptops, phones and all those Internet of Things devices. The device list is ever growing, so you need to monitor and keep a check on them all.

After my two incidents, I thought I’d use nmap, which is part of the Kali penetration toolkit (https://www.kali.org), to check my home network. I installed it on a very small SD drive, booted the live version and ran nmap; it was fast and easy. The toolkit contains many tools and can be overwhelming, but nmap is a good place to start the checks.

There are many nmap commands, but the one below will get you started. These are common ports that hackers tend to target.

Run the command below from the terminal and you should get a list of the different OSes, devices and port details. It’s a good way to check what you have on your network, and once you know, you can take action – use firewalls and upgrade your OSes.
nmap -p 20,21,22,23,25,53,80,110,135,137,138,139,161,443,512,513,514,1433,3306,1521,5432,8080 192.168.0.* (CHANGE YOUR HOME SUBNET ADDRESS IF DIFFERENT)

It should look something like this.
Starting Nmap 7.60 ( https://nmap.org ) at 2017-07-25 19:24 UTC
Nmap scan report for 192.168.0.1
Host is up (0.015s latency).
PORT STATE SERVICE
20/tcp filtered ftp-data
21/tcp filtered ftp
22/tcp filtered ssh
23/tcp closed telnet
25/tcp filtered smtp
53/tcp filtered domain
80/tcp open http
110/tcp filtered pop3
135/tcp filtered msrpc
137/tcp filtered netbios-ns
138/tcp filtered netbios-dgm
139/tcp filtered netbios-ssn
161/tcp filtered snmp
443/tcp closed https
512/tcp filtered exec
513/tcp filtered login
514/tcp filtered shell
1433/tcp filtered ms-sql-s
1521/tcp filtered oracle
3306/tcp filtered mysql
8080/tcp closed http-proxy
MAC Address: 22:E1:3A:BE:31:1A (your router)

From this list you can take various actions, such as firewall blocks, OS patch updates etc.
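
For example, on a Linux host you could block inbound telnet with a single iptables rule (illustrative only – your router or firewall will have its own interface for this):

iptables -A INPUT -p tcp --dport 23 -j DROP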
There are many tips on the net on how to be more secure; here are a few key ones for your home network that I use.

• Make sure your router passphrase is secure and uses WPA2
• Change the default passwords on the router – make it a complex password, like JamEsBr0wn1977!£, which is very hard to crack (see https://www2.open.ac.uk/openlearn/password_check/index.html) – and change the default SSID name (the default can tell a hacker what type of router you have, and therefore its default password)
• Keep your OS updated
• Use firewalls and AV tools, and don’t open unsolicited emails!
• Disable PIN-based (WPS) access – not many people use it, and with a brute-force attack it’s an easy hack.
• If anyone calls and says there’s a virus on your PC and that they are from Microsoft or any other company, just keep them hanging on the line like I did and have fun at their expense; they will soon put the phone down.

If you don’t want to use nmap and just want to check ports, you can use the script below from GitHub. It’s nothing like the feature-rich nmap utility, but it will scan your IP device and tell you which ports are open. I created it a while ago because I didn’t want to use any third-party software in a production environment, and some basic Perl code made it easy to check the ports.

https://github.com/iopsmon/port_scan
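
If Perl isn’t your thing, the same idea fits in a few lines of Python using only the standard library. The sketch below is an illustration of the approach, not the script from the repo above – change the host and port list to suit:

# Minimal TCP port check using only the Python standard library (illustration only)
import socket

HOST = "192.168.0.1"   # change to the device you want to check
PORTS = [21, 22, 23, 25, 53, 80, 110, 139, 443, 1433, 3306, 8080]

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.5)                     # don't hang on filtered ports
    result = sock.connect_ex((HOST, port))   # 0 means the TCP connection succeeded
    print("{0}/tcp {1}".format(port, "open" if result == 0 else "closed/filtered"))
    sock.close()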