Wednesday, October 14, 2020

User Aware Splunk Dashboards

One of the more interesting aspects of Splunk is giving users direct access to raw data. This is great on so many levels from a troubleshooting or investigative perspective. However, there are times when you'd rather, or need to, give people, let's call it, a guided experience of the data they see. This is particularly true when, within the same overarching or umbrella organization, you have separate lines of business or groups of people, such that you don't want to give people direct access to the data; you want or need to limit access at more of the UI level. You might have a data source like a vulnerability scanner where all of the data lands in one index, and you want to give people access to only the scan data that applies to them.

One way to accomplish this is by adding search-time restrictions to particular roles. While effective, this approach can get complex very quickly. The following Splunk .conf talk gets into some great detail (link). Another approach is to slice and dice which index the data goes into as it is indexed, based on the user groups you have set up. This can be effective as well, but then data from that single tool is scattered across indexes, and what happens if you are using something like CIDR blocks to map data to an index and those CIDR blocks change? In this article I'm going to get into a third approach: making the dashboard user aware, so it displays information based on who the user is without giving them native access to the data.
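To give a taste of the approach up front, here's a minimal SimpleXML sketch. The lookup name `user_to_bu.csv`, its fields, and the `vuln_scans` index are hypothetical placeholders; the `$env:user$` token, however, is what Splunk populates with the logged-in username.

```xml
<form>
  <label>Scan Results (User Aware)</label>
  <!-- Resolve the logged-in user to a business unit.
       user_to_bu.csv is a hypothetical lookup with columns: user, bu -->
  <search>
    <query>| inputlookup user_to_bu.csv | search user="$env:user$" | fields bu</query>
    <done>
      <set token="user_bu">$result.bu$</set>
    </done>
  </search>
  <row>
    <panel>
      <table>
        <!-- Panels only ever search with the derived token,
             so the user never supplies the filter themselves -->
        <search>
          <query>index=vuln_scans bu="$user_bu$" | stats count by dest severity</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

Depending on how the underlying searches are permissioned (for example, backed by reports that run as the owner), the viewing user need not have direct read access to the index itself.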

I should say there are likely several ways to accomplish this that might be more efficient or work better for particular use cases. This worked for me, though, and it can be a good starting point. If you know of other ways to limit data access at the UI level, I'd love to hear about them; feel free to put them in the comments!

Tuesday, May 26, 2020

Drilling into the OTHER category in Splunk

So what has broken my three-year blog-posting hiatus, you might ask? Some nerd-like delight in working through a Splunk dashboard capability I didn't realize was there!

Several days ago some fellow Splunk users asked if there was a way to drill into the "OTHER" category. They had an overview dashboard with a bar chart viz allowing the user to pivot to a more detailed interactive dashboard. The challenge was that the overview graphic leveraged Splunk's ability to show the top N results with the rest rolled up as OTHER. The interactive dashboard didn't like receiving OTHER, as that wasn't a value in the data. I tried a few different approaches, but they honestly didn't work. Through that effort, though, I stumbled upon the ability to set a condition match in XML.

Wait whuuut? I've known about condition match from adjusting navigation bars in Splunk and it turns out this capability is also available in dashboards themselves. Could I use this mechanism for the use case at hand?
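To sketch what that looks like in a dashboard drilldown (the `clicked_other` token, `detail_dashboard`, and `form.category` are made-up names for illustration): the drilldown can branch on the clicked value, so OTHER gets routed differently from a real field value.

```xml
<drilldown>
  <!-- condition match takes an eval-style expression over the click tokens -->
  <condition match="'click.value'==&quot;OTHER&quot;">
    <!-- Clicked the OTHER slice: set a token the dashboard can react to,
         rather than passing along a value that doesn't exist in the data -->
    <set token="clicked_other">true</set>
  </condition>
  <condition>
    <!-- Normal case: hand the real value to the detail dashboard -->
    <link target="_blank">detail_dashboard?form.category=$click.value$</link>
  </condition>
</drilldown>
```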

Saturday, January 14, 2017

Adjusting Splunk forwarder phonehome / throughput

I was in the process of writing up a few things for a new EDU that is going to be spinning up a larger-scale Splunk environment, and figured if I was going to the effort it might as well be placed here for others to see. In working with my own environment today, I realized I was making some adjustments that I take for granted but that we had to learn and bake in. This installment focuses on the following:

  1. Adjusting the forwarder to deployment server phone home interval
  2. Allowing forwarders to send more than 256 KBps
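For reference, both knobs live in .conf files on the forwarder side; the specific values below are illustrative, not recommendations:

```ini
# deploymentclient.conf -- how often the forwarder checks in with the
# deployment server (default is 60 seconds)
[deployment-client]
phoneHomeIntervalInSecs = 600

# limits.conf -- forwarder output throughput cap; the universal
# forwarder defaults to 256 (kilobytes per second)
[thruput]
maxKBps = 0    # 0 removes the cap; a value like 1024 raises it instead
```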

Sunday, November 20, 2016

Find saved searches in Splunk that are failing

I hope to circle back to this eventually. Until then --- enjoy:

index=_internal log_level=ERROR SavedSplunker
| stats count as Count by host message
| rex field=message "savedsearch_id=\"(?<Author>[^;]+);(?<App>[^;]+);(?<Search>[^\"]+)\"(?:, message=)?(?<Message>.+)"
| table host App Search Author Message Count
| eventstats sum(Count) as total by host
| eventstats sum(Count) as foo by host App
| sort -total -foo -Count
| fields - total foo

Saturday, April 9, 2016

Splunk admin tasks after you start getting data in...

I had the rather unique privilege of posting a three-part blog series on Splunk's official site recently. The focus was on some administration tasks Splunk admins should work into their routine. There is a level of assumption when users search in Splunk: these hosts really are these hosts, and events observed within a time range really happened then. The series talks through a couple of methodologies to validate those assumptions.

  • Part 1 - Validating host field values: link
  • Part 2 - Validating agent host's system time: link
  • Part 3 - Getting a feel for data ingestion latency: link
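As a flavor of the Part 3 topic, here's a minimal latency check. This is a common SPL pattern rather than necessarily the exact search from the series: `_indextime` minus `_time` shows how long events took to get indexed after they occurred.

```
index=* earliest=-1h
| eval latency_secs = _indextime - _time
| stats avg(latency_secs) as avg_latency max(latency_secs) as max_latency by sourcetype
| sort - max_latency
```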