Category Archives: NETWORKING


Shoutout to Automine, l00py & the CAST Team.

Still under construction.

Syslog-NG config for stats collection



source s_local {
    internal(); # syslog-ng internal logs
};

# Match the periodic "Log statistics" messages (filter body shown here is an assumption; adjust to taste)
filter f_logstats { message("Log statistics"); };

destination d_logstats { file("/var/log/sng_logs/$HOST/logstats.log" perm(0775) create_dirs(yes)); };

log { source(s_local); filter(f_logstats); destination(d_logstats); };

Output of Syslog-NG stats collection

2016-07-29T15:18:23+00:00 n00b-splkufwd-03 syslog-ng[9673]: Log statistics; processed='', stamp='', processed='', stamp='', processed='center(received)=19', processed='src.internal(s_local#0)=19', stamp='src.internal(s_local#0)=1469805203', dropped='dst.file(d_logstats#0,/var/log/sng_logs/n00b-splkufwd-03/logstats.log)=0', processed='dst.file(d_logstats#0,/var/log/sng_logs/n00b-splkufwd-03/logstats.log)=17', stored='dst.file(d_logstats#0,/var/log/sng_logs/n00b-splkufwd-03/logstats.log)=0', dropped='dst.file(d_catchall#0,/var/log/sng_catchall/n00bfirewall/n00bfirewall.log)=0', processed='dst.file(d_catchall#0,/var/log/sng_catchall/n00bfirewall/n00bfirewall.log)=229', stored='dst.file(d_catchall#0,/var/log/sng_catchall/n00bfirewall/n00bfirewall.log)=0', processed='destination(d_logstats)=17', processed='source(s_catchall)=0', processed='center(queued)=1450', processed='src.none()=0', stamp='src.none()=0', dropped='dst.file(d_file#0,/var/log/messages)=0',
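To see how that one long stats line breaks down into metric/counter/value groups, here is a rough Python sketch (illustrative only, not part of syslog-ng) using the same pattern idea as the Splunk field extraction below:

```python
import re

# A trimmed excerpt of the syslog-ng "Log statistics" line shown above
sample = ("Log statistics; "
          "processed='src.internal(s_local#0)=19', "
          "stamp='src.internal(s_local#0)=1469805203', "
          "dropped='dst.file(d_logstats#0,/var/log/sng_logs/n00b-splkufwd-03/logstats.log)=0'")

# metric name, then the counter id inside quotes, then its value
pairs = re.findall(r"([a-z]+)='([^']+)=([^']+)'", sample)
for metric, counter, value in pairs:
    print(metric, counter, value)
# e.g. processed src.internal(s_local#0) 19
```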


[monitor:///var/log/sng_logs/.../logstats.log]
disabled = 0
followTail = 0
host_segment = 4
index = n00blab
sourcetype = syslog


[syslogng_stats]
REGEX = \s(?<_KEY_1>[a-z]+='[^']+)=(?<_VAL_1>[^']+)


[syslog]
KV_MODE = none
REPORT-syslogng_stats = syslogng_stats


index=n00blab sourcetype=syslog
| reverse
| streamstats window=2 current=t range(processed_*) as processed_*_delta, range(dropped_*) as dropped_*_delta, range(stored_*) AS stored_*_delta
| timechart span=5m avg(*_delta) as *_delta
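The streamstats step above turns cumulative counters into per-interval deltas: with `window=2`, `range()` is simply max minus min, i.e. the difference between each sample and the previous one (for monotonic counters). A plain-Python sketch of the same computation:

```python
# Hypothetical cumulative 'processed' samples from successive stats lines
processed = [19, 36, 55, 55, 80]

# Delta between each sample and its predecessor, like range() over a 2-row window
deltas = [cur - prev for prev, cur in zip(processed, processed[1:])]
print(deltas)  # [17, 19, 0, 25]
```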




Splunk SNMP data with Cacti Mirage

Splunk SNMP data

Splunk SNMP data was the first use case I thought of when I met Splunk.

Oh the grandiose plans I had for killing my various Cacti installs with this new, fancy, dynamic tool. All the customization I wanted was already done for me in Splunk. Finally, I could just focus on working with the data.

The potential was limitless.

That was 3 years ago.

In that time I spoke to many a Splunk enthusiast, and one thing was abundantly clear: there was no “standard” way to Splunk SNMP. It was the one thing that everyone did differently. Different pollers, different environments, custom solutions. There was no simple one-line answer.

And my Cacti installs lived on….

New Year, New SNMP

This year, my new year’s resolution was to finally build THE way for me to reliably and efficiently collect SNMP data en masse (100K+ sources) and ingest it into Splunk.

The one thing that I and all the Splunkers I spoke with agreed on was that polling SNMP data is best left to a downstream agent. Just feed Splunk the output and let it do what it does best.

I set out and tested various SNMP pollers – Cacti, rrdtool itself, collectd [shout out to collectd for nailing the flexible output options, but the non-cron poller – I opened a ticket/feature request on it – and the config management meant I just couldn’t go that route in my environment], OpenNMS [a sweet option for writing to an interface, but again I was worried at scale], SolarWinds [too much for what I really needed], the Splunk SNMP modular input [again, a solid job, but no cron made it non-viable for me], etc, etc, etc.

I was looking for the perfect mix of easy SNMP administration/supportability and reliable output formatting for Splunk. Plus, I already had multiple reliable Cacti deployments with 4+ years uptime, with over 50k data sources each, so if there was a way I could retrofit something to use that existing infrastructure it would be a big win.

In my opinion and to my surprise, old faithful Cacti, who I had set out to destroy, still reigned supreme when it came to easy, reliable, management of massive amounts of SNMP, but for one major flaw….

SNMP polling output.

How would we get the data out of Cacti and into Splunk??

Mirage is born

I enlisted the help of some amazing friends and colleagues, and after a failed attempt at ingesting rrd files, it soon dawned on us:


How I didn’t think of this 3 years ago is beyond me.

We utilized Cacti’s API architecture to design a plug-in that mirrors Spine SNMP poller data to a log file before it is written to rrds, outputting beautifully structured, easy-to-consume KV pairs.

Mirage was born.
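To give a feel for the KV-pair output described above, here is a hypothetical sketch of emitting one poller sample as a KV line (field names are taken from the field list later in this post; the exact Mirage format may differ):

```python
# One hypothetical poller sample; keys mirror the Mirage field names (ldi, rrdn, rrdv, ...)
sample = {
    "ldi": 42,                               # local_data_id
    "rrdn": "cpu",                           # rrd_name
    "rrdv": 37.5,                            # rrd_value
    "hostname": "n00bserver",                # device Cacti is querying
    "name_cache": "n00bserver - CPU Usage",  # Cacti data source name
}

# Render as space-separated key="value" pairs, trivially parsed by Splunk
line = " ".join(f'{k}="{v}"' for k, v in sample.items())
print(line)
```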

We then built a Splunk add-on to help enrich the data we get, leveraging Cacti’s treasure trove of data sitting in its MySQL db.

A week later, when we stood back and watched it all at work, both in a large ISP production environment and at home in our mini-labs, it was clear we could finally cross Splunk SNMP off our whiteboards.

Over the next couple of months, we will be hard at work iterating on the monster we have created, and we hope to get feedback from what I believe to be a large community of sysadmins looking for a way to leverage all the work they did building out Cacti as we move to the next generation of IT Ops monitoring and intelligence tool-sets.

Try Splunk SNMP data with Cacti Mirage and let us know what is working and what opportunities there are to make Mirage even better!


To Splunk SNMP data with Cacti Mirage as your acquisition layer,  you will require:


Before you can Splunk SNMP data, you must acquire it by installing the Mirage plug-in for Cacti. Mirage allows you to mirror Cacti poller data to a log file for easy ingestion into other systems. This also allows you to breathe new life into your Cacti installs by using Cacti as your more-than-capable SNMP poller (assuming you are using spine), while letting more modern upstream systems handle the rest.

Connect to the Cacti server CLI and navigate to CACTI_HOME/plugins (likely under /usr/share or /var/www)

cd /var/www/cacti/plugins

wget -O


mv mirage-master mirage

chown -R www-data:www-data mirage

Log into the Cacti GUI and navigate to Console > Plugin Management.
Click to install mirage, then to enable it.
Navigate to Console > Settings.
Review the Mirage settings and save.
Wait until the next Cacti poll and ensure mirage_poller_output.log is populated in CACTI_HOME/log/

cd /var/www/cacti/log

ls -la

root@n00bserver:/var/www/cacti/log# ls -la

total 180
drwxr-xr-x  2 www-data www-data  4096 Jan 29 00:45 .
drwxr-xr-x 14 www-data www-data  4096 Jan 26 10:19 ..
-rw-r--r--  1 www-data www-data 93118 Jan 29 01:45 cacti.log
-rw-r--r--  1 www-data www-data    33 Jul 19  2015 .htaccess
-rw-rw-r--  1 www-data www-data 68326 Jan 29 01:45 mirage_poller_output.log


Now to install the Splunk Cacti Mirage TA. The following describes a stand-alone Splunk Enterprise deployment. For distributed environments, install the Cacti Mirage TA on the search head (SH), forwarder (FW) & indexer (IX).

Start by creating a ‘cacti’ index on your indexer.

Settings > index > New

Name it ‘cacti’ and save with all defaults.

Return to the CLI

cd /opt/splunk/etc/apps

wget -O


cd Splunk_TA_Cacti-master/

mkdir local

cd default

cp inputs.conf /opt/splunk/etc/apps/Splunk_TA_Cacti-master/local

cd ../local

vi inputs.conf

Check to ensure the paths to mirage_poller_output.log and cacti.log are correct for your environment, and set them to disabled = false. Set the cron to fire as desired (the example below fires every day at 6 am). This scripted input must run before the Splunk lookups can enrich your SNMP data, so schedule it soon after the time you finish editing the inputs if you want to play right away!

[monitor:///var/www/cacti/log/mirage_poller_output.log*]
disabled = false
index = cacti
sourcetype = cacti:mirage

[monitor:///var/www/cacti/log/cacti.log]
disabled = false
index = cacti
sourcetype = cacti:system

[script://./bin/ /var/www/cacti]
source =
disabled = false
index = cacti
sourcetype = cacti:lookup:mirage
interval = 0 6 * * *
#interval = 86400
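A note on the cron-style interval: in the standard 5-field cron format (minute hour day-of-month month day-of-week), `0 6 * * *` fires once daily at 06:00, while `* 6 * * *` would fire every minute between 06:00 and 06:59. A minimal Python sketch (illustrative only, not Splunk's actual cron parser, and supporting only `*` and plain integers) shows the difference:

```python
from datetime import datetime

def cron_matches(expr, dt):
    """Check a 5-field cron expression against a datetime.
    Supports only '*' and plain integers per field."""
    fields = expr.split()
    # Note: real cron counts Sunday as day 0 while weekday() counts Monday
    # as 0; it does not matter here since day-of-week is '*'.
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.weekday()]
    for field, value in zip(fields, values):
        if field != "*" and int(field) != value:
            return False
    return True

six_am = datetime(2016, 1, 29, 6, 0)
six_thirty = datetime(2016, 1, 29, 6, 30)

print(cron_matches("0 6 * * *", six_am))      # True: fires at 06:00
print(cron_matches("0 6 * * *", six_thirty))  # False: only at minute 0
print(cron_matches("* 6 * * *", six_thirty))  # True: any minute in hour 6
```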

Save the inputs.conf


Restart Splunk

cd /opt/splunk/bin

./splunk restart

Once Splunk restarts and your configured cron has fired, log into the Cacti GUI and confirm the script has run by examining the ‘Cacti Polling & Lookups Status‘ dashboard. If you do not see it, go back and alter the cron and restart, or wait until its next run.

Now you can run the ‘Cacti lookup mirage build‘ outputlookup to enrich your SNMP events. Once built, you can search the Cacti Quick chart or use SPL to chart your data.


For a quick browse of what you have ingested try:

eventtype=cacti:mirage host=n00bserver | top limit=0 name_cache


Important Fields:

  • ldi (local_data_id) – The unique data source id in Cacti
  • rrdn (rrd_name) – The type of KPI/measurement in the rrd
  • rrdv (rrd_value) – The value the poller was writing to the rrd.
  • name_cache – Cacti Data Source Name/Name of the graph
  • data_source_type_id – 1 = Gauge, 2= Counter
  • host – The Cacti Server
  • hostname – The device Cacti is querying
  • IP – IP address of the hostname
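Since data_source_type_id distinguishes gauges from counters, a downstream consumer treats rrdv differently for each. Here is a simplified sketch (an assumption about consumer-side handling, not Mirage code; it handles only a single 32-bit counter wrap):

```python
def to_rate(prev, cur, seconds, data_source_type_id):
    """Convert two successive rrdv samples to a usable number.
    1 = GAUGE: use the value as-is.  2 = COUNTER: per-second rate."""
    if data_source_type_id == 1:
        return cur
    delta = cur - prev
    if delta < 0:            # counter wrapped; assume a 32-bit counter
        delta += 2 ** 32
    return delta / seconds

print(to_rate(1000, 4000, 300, 2))  # counter: (4000-1000)/300 = 10.0
print(to_rate(0, 55.5, 300, 1))     # gauge: 55.5
```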

For those who may want to play with the local_data_id lookup/db mappings:


Sample Splunk search for graphing cpu on a server

eventtype=cacti:mirage rrdn=cpu hostname=n00bserver
| timechart max(rrdv) by name_cache

You now have the ability to utilize Cacti collected SNMP data in Splunk!

Stay tuned for how to monitor this data with Splunk Machine Learning!

How To Set Up Raspberry Pi Without Monitor (DHCP, ya you know me!)

In this article we will take the SD card containing Raspbian Wheezy that we created in my last post and boot the Pi blind, aka set up a Raspberry Pi without a monitor.

If you plan to use the Pi as a server rather than a desktop environment, do your work from the command line, or just want to save money on accessories, we can completely remove the need for a monitor, keyboard, mouse, etc. by taking advantage of DHCP.

Continue reading How To Set Up Raspberry Pi Without Monitor (DHCP, ya you know me!)