SIEM / ARCSIGHT LOGGER NOTES

Placeholder Image

Having tinkered with Splunk and Elasticsearch, I thought I'd have a play with ArcSight Logger as well; I didn't want to leave the third amigo of my SIEM blogs out.

So, another good logging tool. The install was based on Logger 6.4 on RHEL 7.3 and was easy enough: you have to make a few OS configuration changes and set up firewall rules (or disable the firewall for lab work). The install took around 10 minutes, but again this is just a lab server; for the enterprise you would have to run through a performance, sizing and storage exercise. You would also need to work out the average EPS (events per second), which is based on your devices.
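
For the firewall part on RHEL 7, it boils down to either disabling firewalld for lab work or opening the Logger web UI port (a rough sketch; 443 is just an assumption, use whichever port you chose at install time):

# Lab only - either stop the firewall altogether...
systemctl stop firewalld
systemctl disable firewalld

# ...or keep firewalld running and just open the Logger web UI port
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload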

Logger comes with over 300 connectors; these are the components that collect data from many different devices and send it to Logger in CEF (Common Event Format, an industry standard) or as raw data. Most of the connectors are security based, but there are some for operations use cases. Collection of custom data can also be achieved using FlexConnectors. The data is normalised, which makes it easier to analyse, but the raw data can also be presented for compliance should it need to be. All the data is indexed and compressed.
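
For reference, a CEF event is just a pipe-delimited header (version, device vendor, device product, device version, event ID, name, severity) followed by key=value extensions. The line below is illustrative only; the vendor, product and values are made up:

CEF:0|SomeVendor|SomeProduct|1.0|100|Failed password|5|src=192.168.0.50 dst=192.168.0.1 duser=root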

Logger comes as an appliance or as software (the appliance might be better for smaller companies).

So I collected a few logs locally; using the GUI I was able to add the sources:

/var/log/secure

/var/log/messages

log1

Once you save and enable the receiver, you should see its name on the main page; if you click on the name you will see the raw data from the /var/log/secure log file, as below.

If you then search for "Failed password", you should get a chart similar to the one below.

log2
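
If your lab box doesn't have any real failed logins to chart yet, you can write a few test entries into /var/log/secure yourself with the standard Linux logger utility (not to be confused with ArcSight Logger; the username and IP below are made up):

# Write fake "Failed password" lines to the authpriv facility,
# which lands in /var/log/secure on RHEL
for i in $(seq 1 5); do
    logger -p authpriv.info "Failed password for invalid user test$i from 192.168.0.50 port 22 ssh2"
    sleep 1
done

Once the receiver has picked them up, re-running the search should show them in the chart.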


Another simple example is the search command below, which shows the total event count for each hour:

deviceVendor != "Test" | chart sum(baseEventCount) as Event_Count span=1h

log3

I like Logger; it's a solid tool. The GUI is great and easy to move around, there are many connectors to choose from, and once ingested the data can be forwarded to Operations Bridge Analytics (OBA), a cool analytics engine, or to other ArcSight managers such as ESM. In conclusion, have a play with each logging tool in a POC and see what works for you. Put your admin hat on and think about DevOps, security, ease of use and configuration, and don't forget to look at the licence costs: they all have different models, so you will need to run through a costing exercise once you know what you're going to monitor and what the architecture is going to be; a distributed architecture for the enterprise will add to the cost in terms of setup, licensing and maintenance. Define your use cases and map them to the technology, don't collect every log, ensure the logs are good quality (ISO 8601 timestamps, for example), and have a solid infrastructure in place, as this data will soon grow very large.

Monitoring Tools Changes

Placeholder Image

Over the last 15 years I have been tinkering with some excellent monitoring software tools; these, alongside other types of software, are today known as monolithic software. I recently heard this term while doing Docker training. How times have changed; it made me feel somewhat outdated, which I am ….not, and so I started to research the next generation of tools and figure out why they are so hip and trendy. Here are a few that I have recently read about, and some I have tinkered with:

  • AWS CloudWatch (cloud monitor)
  • Google Stackdriver (cloud monitor)
  • Azure Monitor (cloud monitor)
  • Riverbed (APM)
  • AppDynamics (APM)
  • Splunk (log/SIEM)
  • Elastic (log/SIEM)
  • LogRhythm (log/SIEM)
  • New Relic (APM)
  • SolarWinds (network)

These tools are a mix of classic and open source and can be deployed in the cloud or on premise. Some of the key findings for me: alongside the classic way of deployment, they offer cloud-based monitoring, API services and deployment of the tool itself in the cloud, which makes them more agile than the monolithic tools, and you don't need a big army of people to deploy, configure and maintain them. That said, it does depend on the size of the organisation's requirements, so you still have to perform an environment analysis before you choose your tool. This is sometimes a painful process, but if you do all the upfront analysis work and define a monitoring strategy, you will have a good outcome.

Each tool has its pros and cons. The main difference today is that the DevOps world, which is now standard practice in IT departments, needs agile ways of monitoring because of the speed of everything and continuous integration, so time to deploy, configure and use is a key requirement. What used to take many months to provision servers and software and then deliver can now be done within hours for some use cases in the cloud.
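
As a quick example of that API-driven side, pulling a CPU metric out of AWS CloudWatch is a single AWS CLI call, assuming the CLI is installed and configured (the instance ID and time range below are placeholders to replace with your own):

aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2017-07-25T00:00:00Z \
    --end-time 2017-07-25T23:59:59Z \
    --period 300 \
    --statistics Average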

The next generation of monitoring tools in the limelight offer dashboards, reporting, time series charts, anomaly detection (machine learning), alerts, application diagnostics, network sniffing, metric correlation and ingestion of raw logs (unstructured and structured data), and they do it all faster and make use of APIs. Look out for these features and map them to your use cases to ensure they meet the requirements.

The one standout new feature in some of these tools is machine learning. It's a new way of monitoring: it tries to find the needle in the haystack in volumes of data that are impossible for humans to trawl through, so for log analysis, anomaly detection is used, and it's a great feature. Another way of getting to the data you want is to run search queries against the logs to provide useful insights; some of these tools provide a great way to get the data ingested.
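
As a crude illustration of the idea (nothing like the machine learning in these tools, just a toy sketch), the snippet below counts syslog events per hour and flags any hour well above the hourly average; the log path and the "twice the average" threshold are arbitrary choices:

#!/bin/bash
# Toy anomaly check: count events per hour in a syslog-format file and
# flag hours with more than twice the average hourly event count.
# Assumes standard syslog timestamps, e.g. "Jul 25 19:24:01 host ...".

LOGFILE="${1:-/var/log/messages}"

awk '{
  hour = substr($3, 1, 2)            # HH from the HH:MM:SS field
  if (!(hour in count)) hours++
  count[hour]++
  total++
}
END {
  if (hours == 0) exit
  avg = total / hours
  printf "average events per hour: %.1f\n", avg
  for (h in count)
    if (count[h] > 2 * avg)
      printf "hour %s looks unusual: %d events\n", h, count[h]
}' "$LOGFILE"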

Monitoring start-ups are using cloud (AWS), containers and open source components to build some of these new monitoring tools, and they provide good, fast monitoring services. It's worth seeing if these meet your requirements too, but keep security in mind, as sometimes the on-premise tools might be better suited given the risk of your infrastructure being compromised.

Monitoring has been, and always will be, a big challenge. Trying to monitor the full IT stack is a very difficult task; in the past some software projects would fail to even include monitoring, whereas today it should be a standard process. I go by the principle that it's never going to be 100% perfect, as I have learned over the years: if you get 80% of it right (the 80/20 rule), then you are winning.

A Typical IT Monitoring Stack

  • End User (synthetic monitoring, web page & protocol use and performance, reports)
  • Applications (performance and custom metrics, SQL queries)
  • On-Premise Infrastructure (performance and availability, metrics, logging, security, hardware)
  • Cloud (service, compute, database, capacity, scaling, API, reports)

With this stack in mind, consider: do you really need every metric and web application monitored? Do you really need an agent on every server? Do you really need every event or log (unless required for compliance)? These are a few things to think about; keeping control of the monitoring environment is key, or it will get out of hand very quickly.

Here are a few tips on getting the right monitoring outcome:

  • Work on your monitoring technical business goals
  • Identify the functional and non-functional requirements (ensure it's future proof, API oriented, performant, easy to use and scalable for the enterprise)
  • Architect and design the solution (HLD/LLD)
  • Plan the build activities
  • Test the solution
  • Develop tools skills
  • Create standards
  • Deploy the solution
  • Grow the solution; don't use all the features at once, but exploit them over time
  • Configure as much automation as possible
  • Maintain the solution

No one tool, or set of tools, is going to give you the magic solution you want. A mix of tools (best of breed) can become a nightmare to manage and integrate, although with REST API services it's becoming easier. A one-stop-shop platform can lead to vendor lock-in, and can sometimes be cumbersome to use, but on the plus side it gives you a complete framework that all the teams (network/apps/db/os/dev) can eventually learn and exploit to their advantage. Sometimes it's better to use a complete framework and sometimes it's better to use best of breed.

The one thing I would suggest is to look at how easy the tool is to use and maintain; if you're fighting with the tool to get it to do what you want in a timely manner, it's not going to be a happy relationship. Use the proof-of-concept stage to validate the solution; this may cost a little more upfront, but in the longer term it can save you a lot in costs and headaches.

Security Monitoring / Kali / Nmap / Hacking

Placeholder Image

One of my ISP email accounts got hacked not long ago; they managed to use the SMTP server to send spam emails, but I detected it in time and shut the account down, as it wasn't a key account. The ISP said it was a brute force attack.

I also had a telephone call from someone pretending to be from Microsoft. They said I had a virus and that I needed to install TeamViewer so he could check my PC. To cut a very long story short, I kept him on the phone for 20 minutes and pretended not to know how to use the PC properly. He was trying to get me to run eventvwr to show me the logs, but I kept making typing mistakes, which was making him so vexed that I had to hold myself back from laughing. In the end I taught him a lesson and he put the phone down. A few weeks after this incident I heard this news – http://www.bbc.co.uk/news/technology-40430048

With all the cyber-attacks, and the fact that if it has an OS on it, it can be hacked, it was time to check all those OS-based devices on my home network: routers, TVs, PCs, laptops, phones and all those Internet of Things devices. The device list is ever growing, so you need to monitor and keep a check on them all.

After my two incidents, I thought I'd use nmap, which is part of the Kali penetration toolkit (https://www.kali.org), to check my home network. I installed it on a very small SD card, booted the live version and ran nmap; it was fast and easy. The toolkit contains many tools and can be overwhelming, but nmap is a good place to start the checks.

IMG_8488
There are many nmap commands, but the one below will get you started. It covers the common ports hackers tend to target.

Run the command below from the terminal and you should get a list of the devices on your network along with their port and service details. It's a good way to check what you have on your network, and once you know, you can take action with firewalls and OS upgrades.
nmap -p 20,21,22,23,25,53,80,110,135,137,138,139,161,443,512,513,514,1433,3306,1521,5432,8080 192.168.0.* (CHANGE YOUR HOME SUBNET ADDRESS IF DIFFERENT)

It should look something like this.
Starting Nmap 7.60 ( https://nmap.org ) at 2017-07-25 19:24 UTC
Nmap scan report for 192.168.0.1
Host is up (0.015s latency).
PORT STATE SERVICE
20/tcp filtered ftp-data
21/tcp filtered ftp
22/tcp filtered ssh
23/tcp closed telnet
25/tcp filtered smtp
53/tcp filtered domain
80/tcp open http
110/tcp filtered pop3
135/tcp filtered msrpc
137/tcp filtered netbios-ns
138/tcp filtered netbios-dgm
139/tcp filtered netbios-ssn
161/tcp filtered snmp
443/tcp closed https
512/tcp filtered exec
513/tcp filtered login
514/tcp filtered shell
1433/tcp filtered ms-sql-s
1521/tcp filtered oracle
3306/tcp filtered mysql
8080/tcp closed http-proxy
MAC Address: 22:E1:3A:BE:31:1A (your router)

From this list you can take various actions, such as adding firewall blocks, applying OS patches and so on.
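
If you also want nmap to try to identify the operating system on each device, it can be run as root with the -O flag (same subnet caveat as above; OS detection needs root privileges):

sudo nmap -O 192.168.0.*
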
There are many tips on the net on how to be more secure; here are a few key ones for your home network that I use.

• Make sure your router passphrase is secure and that the router uses WPA2
• Change the router's default passwords (make it a complex password – like JamEsBr0wn1977!£, which is very hard to crack; see https://www2.open.ac.uk/openlearn/password_check/index.html) and change the default SSID name (the default name can tell a hacker what type of router you have, and therefore what the default password is)
• Keep your OS updated
• Use firewalls and AV tools, and don't open unsolicited emails!
• Disable PIN-based (WPS) access – not many people use it, and with a brute force attack it's an easy hack
• If anyone calls and says there's a virus on your PC and that they are from Microsoft or any other company, just keep them hanging on the line like I did and have fun at their expense; they will soon put the phone down

If you don't want to use nmap and just want to check ports, you can use the script below from GitHub. It's nothing like the feature-rich nmap utility, but it will scan your IP device and tell you which ports are open. I created it a while ago because I didn't want to use any third-party software in a production environment, and some basic Perl code made it easy to check the ports.

https://github.com/iopsmon/port_scan
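
If you don't even want to grab the Perl script, the same basic idea can be sketched in plain bash using its built-in /dev/tcp device (this is not the GitHub script above, just a rough equivalent; the host and port list are examples to change):

HOST=192.168.0.1
for PORT in 22 23 80 443 8080; do
    # bash opens a TCP connection when you redirect to /dev/tcp/<host>/<port>
    if timeout 1 bash -c "echo > /dev/tcp/$HOST/$PORT" 2>/dev/null; then
        echo "$HOST:$PORT open"
    else
        echo "$HOST:$PORT closed or filtered"
    fi
done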



Agile / Waterfall / Monitoring


agileandwaterfall

Agile and Waterfall for monitoring solutions: these methods have been around for many years and are two different approaches to delivering IT projects.

I have worked on many systems and application monitoring projects and found that for software that is out of the box, or shrink-wrapped, it's better to use the waterfall methodology than agile. This is because the functional and non-functional requirements and the features required can be defined during the design phase and then delivered as expected.

This seemed to work much better than the agile methodology, where the requirements and sprints just seemed to get out of control due to unreasonable demands for feature-rich requirements and capabilities upfront. This hampered the build and delivery process, and so the requirements could not be met in the time available (perhaps given longer they could have been). I believe this is because monitoring solutions for the enterprise take many years to mature: you start off small, build in the key features and functions, then build up the solution and tailor it to how you want it. Agile is suited to DevOps and software development lifecycles; for out-of-the-box software I would choose waterfall, as in my opinion it tends to work better.

opcmsg test script

Placeholder Image

Although OM is reaching the end of its life and the new OMi platform is taking its place, this script is still useful during OM to OMi migration work and can be used when you want to send test events from OML to OMi.

It will generate opcmsg events (for Operations Manager on Linux only). If message storm detection is enabled on the OML server, it may well stop the events after a certain number, as it thinks a message storm is underway, so the script can act as a test for both messages and storm handling.

You will need to create an opcmsg policy and add the application, object and message group, or deploy one with no conditions for test purposes.

oml_gui

================================================================

#!/bin/bash
# Script to generate test opcmsg events
# Sends one message of each severity every 3 seconds, 10 times (50 events in total)

date
cnt=1
for (( i=1; i <= 10; i++ ))
do
  # Fire one event per severity in the background
  /opt/OV/bin/opcmsg severity=normal   application=tstmsg object=tstmsg msg_grp=UAT msg_text="Test message $cnt" &
  /opt/OV/bin/opcmsg severity=warning  application=tstmsg object=tstmsg msg_grp=UAT msg_text="Test message $cnt" &
  /opt/OV/bin/opcmsg severity=minor    application=tstmsg object=tstmsg msg_grp=UAT msg_text="Test message $cnt" &
  /opt/OV/bin/opcmsg severity=major    application=tstmsg object=tstmsg msg_grp=UAT msg_text="Test message $cnt" &
  /opt/OV/bin/opcmsg severity=critical application=tstmsg object=tstmsg msg_grp=UAT msg_text="Test message $cnt" &
  let cnt=cnt+1

  sleep 3
done
date

================================================================