Today in the car I heard a credit card debt consolidation commercial that sort of drove me crazy. While there is a place for those companies, the last line got to me: "If credit card debt is the problem, we are the solution." Heads up there, high speed - the real problem is you can't stop buying stuff you can't afford! Your credit card debt is just the visible symptom, and treating the symptom instead of the problem only lands you back in the same spot. This goes hand in hand with the diet pills that basically say take this magical pill to lose weight and you don't even have to change your daily habits...like overeating and getting no exercise...which is what got you where you are.
Showing posts with label Use Case. Show all posts
Wednesday, March 7, 2012
Friday, February 17, 2012
News flash – IP addresses aren’t computers
Crazy thought, I know, but it isn't hard to get caught up in that mentality. I was trying to think of a way to tell the story of the resources/logs needed to identify sources of badness in incident response of one flavor or another. Visually I was drawing that out somewhat like IP ~ Computer Name ~ User Name all at the top level, and branching under that you have various logs like DNS, DHCP, asset management, authentication, etc. All of these play a part in answering questions about which computers are infected and which users are doing 'bad things.' Anyway, it wasn't until I put that down on the whiteboard that it hit me: IP addresses are a supporting factor in identifying a particular computer, not equal to it. Funny what tool limitations will do to your thinking.
If none of that makes sense to anyone other than me I blame the Nyquil.
Friday, May 6, 2011
The mortar between your defenses
The other day I read a bit by Andreas M. Antonopoulos on Networkworld about how to be an effective security buyer. Of course, when it came to finding the article again when I wanted to write this….I couldn't find it. +1 to the Interwebs, though, because Mike Rothman over at Securosis mentioned it in Wednesday's Incite 4 U. Andreas' advice seems to be: when you are buying security tools, don't buy something designed to fulfill a singular function. Instead go for multi-purpose tools that can cover multiple areas. I think the idea somewhat boils down to killing two birds with one stone, plus it sucks to have to look at one dashboard for each tool you have. Enterprise resource scaling aside, I tend to agree with Mike's take. What really stood out to me was an analogy Andreas used:
Saturday, January 8, 2011
ArcSight - The SIEM Lego Set. Take 2
I wanted to post something a little more positive when it comes to the ArcSight Lego concept. Several months ago a group at work was charged with justifying a particular line item of their budget relating to the use of online resources and subscription fees. What they didn't have was a way to link users to particular site browsing. The issue was bounced around a bit until it hit my plate, and with ArcSight's feeds the solution was fairly easy to craft (though the devil is in the details). Everyone's environment is different, and different vendors/solutions generate different logs. Again, I don't have access to other SIEM solutions, so I'm not sure how easy or hard coming up with a similar solution would be. While this isn't specifically a security use case, the concepts or individual elements could be useful for one down the road. I have reused the login tracker a number of times.
Monday, December 13, 2010
ArcSight - The SIEM Lego Set
(Or for Chris' sake the Erector set!)
This post came mostly out of a comment and a thread post on the internal ArcSight forums. As it turns out, the same person made them both (Vini is the man).
Here's the issue. I believe one of the design goals for ESM is to move to a more Trend-based Dashboard system vs Active Channels. If that isn't one of the official goals, it is at least something I am working on developing content around, because it just makes sense. Here is the scenario – let's say you wanted to track failed logins over X period of time. Your choices are really a Data Monitor or a Trend (or, I guess, a report that runs against the raw events). If you go the data monitor route you can quickly get to an active channel to see the base events…but you also have to wait for the active channel to open against the entire length of time the data monitor covers, and your analysis is based on what you can visually parse line by line in the active channel. On top of that, you are limited in what and how you can display with the top value counts data monitor (which is the data monitor I would use for this sort of thing), nor can you create a report based on the DM itself. Something like so, which is basically an unmodified stock data monitor:
I want control over what fields to display in my initial table/view – in this case, raw counts and the number of unique host machines. Because this is really a query viewer, I can then define a number of other query viewer drill downs to extract more data out of these numbers. This starts to get into my other post about wanting to be able to quickly query Trends or other structures in ESM like you can starting in Logger 4.5, because there will always be some new way to look at the data.
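Mechanically, that initial table is just an aggregation over the captured events. As a rough illustration in Python (with made-up field names and sample data – not ArcSight's actual query syntax), the failed-login count plus unique-host count per user looks like:

```python
from collections import defaultdict

# Illustrative stand-in for failed-login events a Trend might have captured.
events = [
    {"user": "svc_backup", "host": "ws01"},
    {"user": "svc_backup", "host": "ws01"},
    {"user": "svc_backup", "host": "ws02"},
    {"user": "jdoe", "host": "ws03"},
]

counts = defaultdict(int)   # failed-login count per user
hosts = defaultdict(set)    # distinct hosts per user

for e in events:
    counts[e["user"]] += 1
    hosts[e["user"]].add(e["host"])

# The "initial table": user -> (raw count, unique host count)
table = {u: (counts[u], len(hosts[u])) for u in counts}
```

Each drill down is then just a further query keyed off one of these rows (one user, one host, etc.).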
The thing is, because it is a Lego set I can build it to suit my environment and needs – note the several custom drill downs. The problem is once you elevate a level of content towards a Dashboard in the client (don't get me started on the thing they call a web console) you get locked into what Trend Dashboards can deliver. At this point that is mainly drill downs, which are powerful but in a sense limited. There aren't bridges in place to move from a Dashboard to an Active Channel, for instance (many issues relative to field mapping), nor are there variables allowing you to reach into Trends like you can with Active Lists (unknown level of difficulty), nor is there currently a way to quickly move content from the backend of one Dashboard to the next (a very ambiguous developmental request). Because it is a Lego set, Lord only knows how people will or have developed their content, which I'd have to believe makes it just a wee bit challenging for the developers to develop all these things I want.
After all that is said and done, the REAL issue (which I sort of intimated in the previous paragraph) is who cares about failed logins or any ONE piece of content! What you really need is a way to answer the NEXT question – how does that username/system/port/source/destination/pattern relate to all my other content? It feels like a scene from NCIS.
DiNozzo: Hey boss, here is a list of the top 10 usernames with failed logins from the last 24 hours! I printed it off the Dashboard (huge goofy grin on face; holding the paper forward like it is the answer to world hunger)
Gibbs: …… What systems were they on?
DiNozzo: …uhmm (grin starts to fade)
Gibbs: And how do they relate to the dropped outbound connections on the firewall….and while you are at it what internal machines have they touched? This is a SIEM that is supposed to correlate events from all our event streams right?
DiNozzo: working it boss! (runs away; grin totally gone)
The problem is putting the Legos together so that the content isn't built in a linear, dead-end fashion that resembles spokes on a wheel – always moving AWAY from the rest of the content. And to make things more interesting, the data is dynamic – the top 10 X in time period Y will likely always be different. The data needs to be actionable; only what counts as "actionable" will likely change from incident to incident because the threatscape is so fluid. This is where and why you need things like filters for Trends, the ability to use conditional statements in Trend queries, etc. Why? Because the data I want is already captured! I just have a new way I need to look at it. I don't want to have to recreate a Trend or Active List every time a new use case comes around (a MAJOR PITA and time sink, since I have to recreate all the downstream content), and I don't want to create another Trend that duplicates the storage. I have one rule that basically writes the same data to 3 different Active Lists, and the event firing itself is captured in 2 different Trends. I don't want to have to do that.
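To make the "new view of already-captured data" point concrete, here is a minimal Python sketch (field names are illustrative) of deriving three different views from one stored event stream, instead of writing the same data into three separate stores:

```python
from collections import Counter

# One captured event stream -- stand-in for a single Trend/Active List.
events = [
    {"user": "jdoe", "host": "ws01", "hour": 9},
    {"user": "jdoe", "host": "ws02", "hour": 9},
    {"user": "svc_backup", "host": "ws01", "hour": 14},
]

# Three "views" derived on demand from the one store -- no duplicate capture.
by_user = Counter(e["user"] for e in events)
by_host = Counter(e["host"] for e in events)
by_hour = Counter(e["hour"] for e in events)
```

That is all I'm really asking the tooling for: capture once, slice many ways.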
I feel like that one sort of got away from me a bit.
I just need the pieces of my ArcSight ESM Lego set to fit together better to allow me to use a piece from this set and one or two from that set. To understand next week's issue better I will probably have to put another piece on or reorder the first 3 pieces.
Thursday, September 16, 2010
Of Logs...and crap. Or is that the crappiness of logs?
The problem with looking through logs is it's a little like looking through a whole lot of poop. Metaphorically speaking. I mean, I have never really spent a whole lot of time looking through or pondering poop, so I'm reaching here a bit. That isn't to say there isn't value in looking at it. Once you get a baseline, changes in color, volume, frequency, consistency, etc. can all point to a person's general health. There comes a point, though, where all it is really saying is someone ate something sometime. The point is logs generally tend to fall into the same category. They are evidence of things that have happened. The problem is "what happened" doesn't always translate well to "why something happened." That might sound a bit crazy, but walk with me a bit. The sales pitch of a SIEM vendor generally goes like this: "What if a user goes to a malicious site…or gets an infected file…or the user plugs in an infected USB, gets infected, and then the computer starts doing X, Y, and Z. Wouldn't you want to see/alert on that?" Of course. But while the scenario sounds good, what you have heard, even on a subconscious level, is that the SIEM will be able to work backwards and tell you why something happened. In reality, not only do you not (generally) start out knowing the computer is infected (aka why it generated the logs), the events it does create are drowning in a steaming cesspool…cessocean of crap from all over. The damn dingleberries (ahem...sorry) are hiding in millions and millions of events.
Thursday, June 3, 2010
Arcsight '10 User Conference - Presentation Idea
Sadly, it's not looking good for the presentation idea I submitted for the ArcSight '10 User Conference. Don't get me wrong - I know I am a little fish in a big pond, but I really think there is some value. How much value there is relative to other items on the plate is another topic, and one ultimately they will decide.
What I had hoped to talk about was the framework/system I designed to let anomalous activity "bubble up" to the surface without elaborate and extensive use cases. Anomalous activity and systems will, in a sense, almost triage themselves. While many large, 24x7 SOC operations probably have people watching events scroll by and reacting in near real time, there are many SMB types out there that simply can't sustain that kind of op tempo. The few companies and individuals I have interacted with who also use ArcSight or another SIEM/LM tend to fall into the category where, if they don't have a 24x7 shop, they spit out daily reports that are reviewed in the morning without really leveraging all that their SIEM can provide. There aren't a whole lot who have comfortably found the middle ground. Granted, my data set for that observation is fairly small.
Don't get me wrong - this isn't a silver bullet or an especially magical use of the product. It does, however, provide an extensible framework that alerts users and then provides quick access to pertinent live and historically siloed data when they open the ESM console, without having to root around.
Of course one of my hidden agendas was for other experts to tell me how much better their systems are. That would give me additional insight and help me add to and refine my own. I also hoped it would spark discussion along the lines of best SIEM/ArcSight use for SMB types.
Update: I gave up hope too soon. ArcSight has accepted the presentation idea and asked me to run one of the breakout sessions. Am excited!
Wednesday, June 2, 2010
"Good enough" ArcSight/Use Cases
There have been a couple articles that have popped up here and there that seem to have had their base in Duncan Hoopes' FUDsec article about things being "good enough" – well, if not their base, then a similar theme.
Last week I took some time to drill into several Win2k8 failed login events and how ArcSight was parsing them. For event 4625 (which replaced 10 Win2k3 events) I was surprised to find a rather key piece of data – the sub status code – stuck in a field you can't query on, and not consistent with where the same data was parsed and dumped in the corresponding Win2k3 events. What is key about these codes is that they let the reader know WHAT the condition was surrounding a particular login failure – the user account doesn't exist, the account was locked out, the account is currently disabled, etc. It would be easy to question why ArcSight hasn't "fixed" this issue. The better question, to me, is why no one over the 2-plus years the OS has been out has actually brought the issue up to ArcSight in the first place.
Maybe I look at our service contract differently than others, and honestly you can make an override relatively easily; bypass the issue and move on. But why not bring it up and try to get the thing fixed at a macro level…unless people don't have content that involves that data and aren't looking at it anyway? Is this a case where simply getting the ball on the green and wrapping your arms around just the event is good enough? Don't get me wrong, I don't have a ton of content built around each of the sub messages, but I do throw them into the Trend that tracks failed logins, where they eventually show up in multiple reports. I mean, wouldn't you want to differentiate between 600 failed login attempts that happened because some idiot used his domain credentials on a service account and didn't change the password vs 600 failed login attempts for an account that didn't exist?
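For reference, the sub status codes in question are documented NTSTATUS values. A small illustrative lookup in Python (the normalization logic is my own; this is not how ArcSight parses the field) might look like:

```python
# A handful of the documented 4625 sub status codes (hex NTSTATUS values).
SUB_STATUS = {
    "0xC0000064": "user name does not exist",
    "0xC000006A": "bad password",
    "0xC0000071": "password expired",
    "0xC0000072": "account disabled",
    "0xC0000234": "account locked out",
}

def classify(sub_status: str) -> str:
    """Normalize a hex code string and map it to a failure reason."""
    key = "0x" + sub_status.lower().removeprefix("0x").upper()
    return SUB_STATUS.get(key, "unknown")
```

With that one field queryable, "600 bad passwords on a service account" and "600 attempts against a nonexistent account" become two very different reports.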
Wednesday, May 19, 2010
Hidden value of Windows 673/4769 events in your SIEM
If you are like me you are always (or in stages) on the lookout for "noise" events to filter out of the SIEM. Windows event 673 is fairly tempting in that regard. However, depending on what sources you are pulling in, you can leverage these events, which are recorded on your DCs, to see PCs hitting other PCs. There are two main limitations to these events. The first is you can't see what network resource on the destination the source is trying to touch. The second is you can't see whether the attempt was successful. If you REALLY needed that information, though, you probably have the appropriate level of logging turned on at the destination anyway.
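As a rough sketch of how you might use these DC-side events, here is a tiny Python example (field names and values are illustrative, not the actual ArcSight schema) that maps each client to the set of services/machines it requested tickets for:

```python
from collections import defaultdict

# Stand-in records for 4769 (Kerberos service ticket request) events
# as pulled from a domain controller.
tickets = [
    {"client_ip": "10.0.0.5", "service": "WS07$"},
    {"client_ip": "10.0.0.5", "service": "WS09$"},
    {"client_ip": "10.0.0.8", "service": "WS07$"},
]

# Which machines is each client reaching for?
touched = defaultdict(set)
for t in tickets:
    touched[t["client_ip"]].add(t["service"])
```

A workstation suddenly requesting tickets for dozens of other workstations is exactly the kind of PC-to-PC pattern worth surfacing, even without knowing which resource was touched or whether the attempt succeeded.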
The complete guide to SIEM use cases
I started looking for information on the web about SIEM use cases over a year ago – almost a self-directed search for just-in-time learning, as it were. Unfortunately, the list doesn't exist. I think I have come to terms with that fact /wipes tear. The reality is everyone's environment is different. Different tools, different event sources, different size shops, different foci, etc.
Don’t get me wrong. There are nuggets “out there” just…spread around. Hopefully I can throw a few things out there as well as this progresses. Always looking for ideas whether complete or still in concept phase. Would also be interested in getting a feel for good classes out there. (hit me at m j runals at gmail dot com if you don’t want to discuss in the comments).
At any rate I spent some time yesterday marrying a couple different items we created within our ArcSight ESM. I was left with two thoughts:
1. I used a good number of Query Viewers and variables to pull this off (looking forward to global variables in 5.0) and was left with the feeling that I was building an application within an application. The feeling was very strange, though I can't really put my finger on why….could use more sleep I guess.
2. I want a way to combine multiple queries into one trend or chart.
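As an illustration of what I mean by combining queries, here is a minimal Python sketch (made-up numbers) that joins two per-hour query results into one table suitable for a single chart:

```python
# Two separate per-hour query results -- e.g., failed logins (query A)
# and lockouts (query B). Values are made up for illustration.
failed = {9: 12, 10: 7, 11: 3}
locked = {9: 1, 11: 2, 12: 1}

# Outer-join on the hour so every hour from either query gets a row.
hours = sorted(set(failed) | set(locked))
combined = [(h, failed.get(h, 0), locked.get(h, 0)) for h in hours]
# Each row: (hour, failed_count, lockout_count) -- one chart, two series.
```

That outer-join-on-a-shared-key step is the whole ask: two queries, one trend/chart.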
Saturday, April 17, 2010
ArcSight Use Cases and....Golf?
A couple weeks ago my wife was watching the Masters and I happened to see Phil Mickelson hit his second eagle in a row. The amazing part to me is all the vectors in play that led the ball to eventually fall into a hole with a 4.25 inch diameter: gravity, ball spin, force upon landing, multiple slopes on the green, etc.
I wondered if there was an analogy between that and ArcSight/SIEM use cases. The problem is I am not sure which side of the analogy they fall on.
On one hand the SIEM content creator could be the golfer and the badness you are trying to capture is the ball. You massage your content so that you see this, followed by that, followed by this other thing and then WHAM the trap springs and you have IDed badness – red flags go up and the alarm gong sounds. In that sense I often think of ArcSight as building a cantilevered mouse trap.
On the other hand the “bad guy” could be both the golfer and the golf ball. The content creator is really just focused on people who hit the green and doesn’t really care about how they got there. If the ball lands in the hole then things are really really bad but the key here is proximity to the hole is enough to alert on/be reviewed.
At a bigger-picture level, the two ways of looking at the analogy probably represent a SIEM-centric vs a log-review approach.