Showing posts with label Alert.

Wednesday, April 9, 2014

Detecting OpenSSL version data in Splunk

I won't go into the Heartbleed details as you likely already know them. From a Splunk perspective there are any number of ways to try to get your arms around this issue, but they are highly dependent on the types of data you are collecting. That said, if you are using the Splunk Linux TA with the package script enabled and/or the Windows TA with the InstalledApps_Windows script turned on, you could use the following queries to extract the OpenSSL version. You could also combine the queries (a rough combined version is sketched after the Windows one), but posting them separately here keeps them easier to read. Obviously adjust based on changes you've made (i.e. sourcetype).

Linux
sourcetype=package | multikv | search NAME=openssl | dedup host ARCH | eval HBconcern = case(match(VERSION,"(^0\.\d\.\d|^1\.0\.0)"), "Too Low", match(VERSION,"^A"), "HP (Not familiar)", match(VERSION,"^1\.0\.1[a-f]"), "Potentially Susceptible", match(VERSION,"^1\.0\.1[g-z]"), "Patched", match(VERSION,"^1\.0\.2-beta"), "Potentially Susceptible", 1=1, "fixme") | table host NAME VENDOR GROUP VERSION ARCH HBconcern | sort host

Windows
sourcetype=InstalledApps_Windows DisplayName=openssl | rex  "DisplayName=OpenSSL\s+(?<VERSION>\S+)\s+\((?<ARCH>[^\)]+)"| dedup host ARCH | eval HBconcern = case(match(VERSION,"(^0\.\d\.\d|^1\.0\.0)"), "Too Low", match(VERSION,"^A"), "HP (Not familiar)", match(VERSION,"^1\.0\.1[a-f]"), "Potentially Susceptible", match(VERSION,"^1\.0\.1[g-z]"), "Patched", match(VERSION,"^1\.0\.2-beta"), "Potentially Susceptible", 1=1, "fixme") | table host VERSION ARCH HBconcern | sort host
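
As noted above, the two searches could also be rolled into one. A rough, untested sketch using append (subject to the usual subsearch limits) might look like this:

Combined
sourcetype=package | multikv | search NAME=openssl | append [search sourcetype=InstalledApps_Windows DisplayName=openssl | rex "DisplayName=OpenSSL\s+(?<VERSION>\S+)\s+\((?<ARCH>[^\)]+)" | eval NAME="openssl"] | dedup host ARCH | eval HBconcern = case(match(VERSION,"(^0\.\d\.\d|^1\.0\.0)"), "Too Low", match(VERSION,"^A"), "HP (Not familiar)", match(VERSION,"^1\.0\.1[a-f]"), "Potentially Susceptible", match(VERSION,"^1\.0\.1[g-z]"), "Patched", match(VERSION,"^1\.0\.2-beta"), "Potentially Susceptible", 1=1, "fixme") | table host NAME VERSION ARCH HBconcern | sort host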

What I'm not sure about is the regex for 1.0.2-beta, as I haven't actually seen that version installed; I'm guessing it shows up like that.
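
If you want to sanity-check how a version string would get bucketed before one ever shows up in your data, you can fake a value and run it through the same match. This is just a throwaway test (the 1.0.2-beta1 string is my guess at the format):

index=_internal | head 1 | eval VERSION="1.0.2-beta1" | eval matches=if(match(VERSION,"^1\.0\.2-beta"), "yes", "no") | table VERSION matches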

Friday, September 27, 2013

I want more time to play!

I find myself in a somewhat strange place today: because I'm going to be at the Splunk conference next week, I don't have much scheduled that needs to be done (or staged to be done this weekend). This reminds me of a line that has come up a few times as we've been going through the interview and candidate selection process for two open slots we have in the office. We have all been working way too many hours and want some 'free time' back in our normal routine. I'm not talking about a mental health break or time away from the office so much as having a pocket or two of time where we can explore, investigate, and work on the little side projects and quality-of-life things that need to be done. Generally speaking they aren't hard or long things to do, but they get sidelined because of higher priorities.

So I'm monkeying around with a few things in Splunk and, two rabbit holes later, come up with a query that quite frankly doesn't return a whole lot of hits for me over the last month. What it DOES show is a server that wasn't able to install some config packages I was pushing from my deployment server.

index=_internal source=*metrics.log component="DeploymentMetrics" status="failed" | stats max(_time) as time by hostname event scName appName fqname | convert ctime(time)

This event is created on your deployment server. I'm not sure what fqname stands for exactly, but in my case it was showing the path the server was trying to install the app to (fully qualified path name is where my mind goes, but that doesn't quite fit the data). scName is likely server class name and appName is obviously the app itself - both are references to your serverclass.conf contents. With over 1k agents deployed, the fact that this found issues with only 1 server is pretty cool I suppose. Will likely bake this into the app I'll never create re: first paragraph =)
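
If that app ever does get created, the core of it would just be this search scheduled as an alert. A rough savedsearches.conf stanza might look something like the following (the stanza name, schedule, and email address are made up, and I haven't tested this):

[Deployment Server - Failed App Installs]
search = index=_internal source=*metrics.log component="DeploymentMetrics" status="failed" | stats max(_time) as time by hostname event scName appName fqname | convert ctime(time)
dispatch.earliest_time = -24h@h
dispatch.latest_time = now
enableSched = 1
cron_schedule = 0 7 * * *
counttype = number of events
relation = greater than
quantity = 0
actions = email
action.email.to = you@example.com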


Tuesday, June 26, 2012

Statistical vs Rule based Threat Detection

A number of different discussions have led me to think about the difference between log management and SIEM when it comes to their use and play in threat detection. Of the many items that could be discussed, what I came to is the difference between statistical and rule-based threat detection.

An oft-used analogy, even referenced in the Verizon DBIR, is the difference between looking at haystacks and needles in haystacks. A statistical detection methodology might be to review the top N of X activity within Y timeframe. The point of this exercise is twofold. The first is simply to look at the “curve” of the numbers involved in the top 5 relative to the top 10 relative to the top 30, or just a spike in a line graph showing the volume of logs collected. Any of these might indicate something has happened and might be worth diving deep on. The second is to help dial in your tools by whitelisting systems performing normal activity. If you are looking for outbound SMTP traffic sorted by volume in a day, you should be able to easily spot your email gateways. Whitelist them and the next time the report is run the top of the list might contain compromised systems (a rough sketch of that search follows below). By and large many log management systems should be able to accomplish this sort of activity.
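
To make the SMTP example concrete in Splunk terms, something like the following untested sketch would do it, assuming firewall data with src_ip, dest_port, and bytes fields and a hypothetical known_mail_gateways.csv lookup holding your whitelisted relays (adjust index and field names to your environment):

index=firewall dest_port=25 NOT [| inputlookup known_mail_gateways.csv | fields src_ip] | stats sum(bytes) as total_bytes count by src_ip | sort - total_bytes | head 20

Anything sitting near the top of that list that isn't a mail gateway is worth a closer look.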

At some point, though, you will want to focus in on specific threat activity. Take today’s SANS diary update on Run Forest Run, or Sality, Tidserv, whatever. In this case you have specific information and want to receive an alert when one of your internal systems hits an IP, URL, or multi-step pattern of activity. This is your rule-based needle-finding capability (a minimal example is sketched below). Generally speaking this requires a rule engine of varying levels of sophistication located within some point solution or product. Statistical detection won’t really be able to get you this.
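
In Splunk terms (every SIEM has its own rule syntax), a minimal version of that kind of rule might look like the untested sketch below, assuming proxy or firewall data with a dest_ip field and a hypothetical known_bad_ips.csv lookup built from the indicators in the diary:

index=proxy [| inputlookup known_bad_ips.csv | fields dest_ip] | stats count earliest(_time) as first_seen latest(_time) as last_seen by src_ip dest_ip | convert ctime(first_seen) ctime(last_seen)

Saved as a scheduled alert, it fires whenever an internal host touches one of the listed addresses.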

The challenge is many point solutions, by design or omission, aren’t able to factor in the larger view in reviewing the needles found. MSSPs are notorious (too strong?) for this but then so are things like IPS. “We saw this and this so we wrapped it up in a pretty bow for you” ….great, but I need more context. The SIEM technology space, in general, was supposed to fill this gap. Not only can you develop specific rules to find needles in the overall stream of log consciousness but to a greater or lesser extent, based on vendor/tool/administrator, use them in a more statistical way. I think a lot of “Mah SIEM sux” mentality comes from how you approach your SIEM relative to this overall issue. That is a rabbit hole that I don’t want to go down though.

Especially when you first start out, if you dive directly into a rules-based approach you will have a harder time seeing the forest for the trees, and depending on the tool used you will be frustrated that you can’t move that lens backward. In other words, dealing with individual infections IS key, but if you are so focused on the individual detections that you lose sight of the bigger picture, you can do yourself a disservice. On the other hand, if you only ever take a statistical approach and never grow, you are going to miss the needles you need to find.

I would argue there is a direct correlation between shop maturity and the ability to fully leverage rule-based technology. If you are just starting out, I suggest you will see more value with a log management, statistical threat detection methodology. This will allow you to get to know your data – as strange as that might sound – which will in turn allow you to better dial in your rule-based solutions.

Wednesday, October 19, 2011

What's the value of chasing alerts and other musings

I don’t know what your thoughts on this are, but I’m trying to work out an illustration of how the churn related to the hamster wheel of run-of-the-mill incident detection and response doesn’t really lead to a whole lot of increased security posture or reduced risk – at least not directly or by itself. Don’t get me wrong, that work needs to be done and isn’t a trivial component of your overall program. At the same time I think this is one of those things where activity doesn’t necessarily indicate/translate/equal accomplishment. Great – you cleaned X machines with Y malware. Next week it will be N machines with Z malware. Soooo…does that mean you are more or less secure than the month before?

I’m of the general opinion that an improved security posture/reduced risk/reduced exposure is a by-product of doing analysis on the information gathered from your incidents and using that to drive change in either various configuration settings or policies (or both). Not rocket science or an original thought really. Hopefully that makes some sense.

Thursday, June 2, 2011

SIEM alerts and an analogy to help people 'get it'

It may just be me, but does anyone else struggle with people in general, and over-arching 'management' in particular, not really getting that when you see a security-based event from your LM or SIEM, it is just the beginning of a process and not the end? I mean, they say they get it, but you just have this feeling that they really don’t. Now I’m not saying my statement reflects my current management; it is more a generalized observation as you start to bring people into the conversation who might not have been exposed to this sort of technology, or whose exposure is limited to a conversation about needing to monitor something. And there is a level of irony, as some of these folks are in the actual vendor space.

I’m sort of an analogy guy and this has been one of those things that has plagued me for some years now – what is a good equivalence that helps people get it? I thought of one today that might work but still feels sort of rough. Figured I would use this as a sounding board and hopefully get some feedback. Who knows, maybe just putting it down on paper will help refine it.

While driving around in your car you generally are vaguely aware of, but may or may not pay a whole lot of attention to, your gas gauge. Some probably have more of an alert mentality and wait for the gas to get to the last X%, when you get the audible chime indicating you are low. You now are more focused on the situation and ‘remediate’ the issue by stopping off at a gas station and filling up. I think this is somewhat analogous to what ‘management’ thinks of when they hear things along the line of monitoring alerts: that the process is generally closed and ends when you get the alert. Alert > gas station > fixed. The problem is that isn't what we are talking about. SIEM/LM alerts, I think, are more akin to all of the gauges in your car that don’t even show until there is an issue. Things like low oil pressure, overheating, your service engine light. Any of those coming on means something isn’t right somewhere, but unless you pop the hood and/or break out some diagnostic equipment you don't know the severity. It could be a complex, multi-component breakdown or just something a little out of whack that takes a simple fix. Oh, and don't forget the number of things that can happen to your car that you don't have a gauge for: brakes, suspension, tire alignment, etc. Multiply that picture by the number of endpoint systems you have in your environment and I think we are getting a little closer to a decent analogy. The point is the alert often isn't actionable other than letting you know you need to start an investigative process.

What do you think - does it work? Do you have suggestions, tweaks, or better analogies that have worked for you as you try to convey to people that monitoring alerts is more than just sitting around waiting for them to pop in your inbox?

Friday, May 6, 2011

The mortar between your defenses

The other day I read a bit by Andreas M. Antonopoulos on Networkworld about how to be an effective security buyer. Of course, when it came time to write this I couldn’t find the article again. +1 to the Interwebs though, because Mike Rothman over at Securosis mentioned it in Wednesday's Incite 4 U. Andreas’ advice seems to be that when you are buying security tools you should not buy something designed to fulfill a singular function; instead go for multi-purpose tools that can cover down on multiple areas. I think the idea somewhat boils down to knocking out two birds with one stone, plus it sucks to have to look at one dashboard for each tool you have. Enterprise resource scaling aside, I tend to agree with Mike’s take. What really stood out to me was an analogy Andreas used:

Wednesday, June 16, 2010

Monitor vs Alert

Several weeks ago now I took some money out of the ATM, and the bottom of the transaction receipt showed a balance that seemed a bit low to me. I brought it up to my wife at lunch and watched her mentally parse a multitude of variables, including the day of the month, what bills had been paid, and the times and amounts of money that move out of checking and into sub-accounts. About 5 seconds later she said the amount sounded about right. This is the difference between monitoring and alerting.

Don't get me wrong. My wife isn't the type that pulls up the account information every day to check the ebb and flow of cash, but by doing the bills and interacting with the accounts she has a familiarity with how and when the money flows. The trick in my mind with a SIEM is to figure out the balance for both types of actions. Not only do you have to balance how and where analysts get data, but also things like how much should be simply monitored vs alerted on; whether the reports you create have too much or too little data in them to be actionable; and, if a report is on a particular data source, whether you can give it some historical context or cross-reference it against other data sources.

To many of those ends, my new favorite report template in ArcSight is one that comes out of the box: four charts on one page and then a table. Most of the daily reports we have been creating were of the one-table variety with a level of detail in the information - source, destinations, times, counts, etc. While we humans are good at visually picking patterns and nuanced things out of spreadsheets, putting what amounts to a rolled-up summary of top information in the multiple charts at the top of the report is great. Now within the report not only have you framed aspects of the data, but since more detailed information is in the table section, readers can pull an IP/user name/whatever from the top page and then search for it within the report.