Category Archives: LINUX


Shoutout to Automine, l00py & the CAST Team.

Still under construction.

Syslog-NG config for stats collection



source s_local {
    internal(); # syslog-ng internal logs
};

# f_logstats is assumed to match the periodic stats messages, e.g.:
filter f_logstats { message("Log statistics"); };

destination d_logstats { file("/var/log/sng_logs/$HOST/logstats.log" perm(0775) create_dirs(yes)); };

log { source(s_local); filter(f_logstats); destination(d_logstats); };
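Note the "Log statistics" lines only show up on the internal() source if syslog-ng is told to emit them periodically. A minimal sketch of the relevant global option (the 600-second frequency here is just an example; pick what suits your polling):

```conf
# Emit "Log statistics" messages on the internal() source every 600 seconds
options { stats_freq(600); };
```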

Output of Syslog-NG stats collection

2016-07-29T15:18:23+00:00 n00b-splkufwd-03 syslog-ng[9673]: Log statistics; processed='', stamp='', processed='', stamp='', processed='center(received)=19', processed='src.internal(s_local#0)=19', stamp='src.internal(s_local#0)=1469805203', dropped='dst.file(d_logstats#0,/var/log/sng_logs/n00b-splkufwd-03/logstats.log)=0', processed='dst.file(d_logstats#0,/var/log/sng_logs/n00b-splkufwd-03/logstats.log)=17', stored='dst.file(d_logstats#0,/var/log/sng_logs/n00b-splkufwd-03/logstats.log)=0', dropped='dst.file(d_catchall#0,/var/log/sng_catchall/n00bfirewall/n00bfirewall.log)=0', processed='dst.file(d_catchall#0,/var/log/sng_catchall/n00bfirewall/n00bfirewall.log)=229', stored='dst.file(d_catchall#0,/var/log/sng_catchall/n00bfirewall/n00bfirewall.log)=0', processed='destination(d_logstats)=17', processed='source(s_catchall)=0', processed='center(queued)=1450', processed='src.none()=0', stamp='src.none()=0', dropped='dst.file(d_file#0,/var/log/messages)=0',


[monitor:///var/log/sng_logs/.../logstats.log]
disabled = 0
followTail = 0
host_segment = 4
index = n00blab
sourcetype = syslog
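host_segment = 4 tells Splunk to use the 4th component of the monitored path as each event's host. A quick way to see which component that is, using the $HOST-templated path from the syslog-ng destination above:

```shell
# Splunk counts path segments after the leading slash, so segment 4 of
# /var/log/sng_logs/<host>/logstats.log is the hostname directory.
# awk's $1 is the empty string before the leading slash, hence $5 here.
echo "/var/log/sng_logs/n00b-splkufwd-03/logstats.log" | awk -F/ '{ print $5 }'
# -> n00b-splkufwd-03
```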


[syslogng_stats]
REGEX = \s(?<_KEY_1>[a-z]+='[^']+)=(?<_VAL_1>[^']+)


[syslog]
KV_MODE = none
REPORT-syslogng_stats = syslogng_stats
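To sanity-check what the transform will extract, you can approximate its regex with grep on a trimmed sample of the stats line (the counter/instance names here are just lifted from the output above):

```shell
# Each match is one counter='instance'=value chunk from the stats message,
# which Splunk then splits into a dynamic field name and its value
line="processed='center(received)=19', dropped='dst.file(d_logstats#0,...)=0'"
echo "$line" | grep -oE "[a-z]+='[^']+=[0-9]+"
```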


index=n00blab sourcetype=syslog
| reverse
| streamstats window=2 current=t range(processed_*) as processed_*_delta, range(dropped_*) as dropped_*_delta, range(stored_*) AS stored_*_delta
| timechart span=5m avg(*_delta) as *_delta
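The streamstats step is what turns syslog-ng's ever-growing counters into usable rates: over a 2-event window, range() is just the difference between consecutive samples. The same arithmetic in a tiny awk sketch, on a made-up 'processed' series:

```shell
# Successive differences of a monotonically increasing counter = per-interval
# delta, which is what range() over a window of 2 samples computes
printf '%s\n' 100 117 117 140 | awk 'NR > 1 { print $1 - prev } { prev = $1 }'
# -> 17, 0, 23
```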




Splunk SNMP data with Cacti Mirage

Splunk SNMP data

Splunk SNMP data was the first use case I thought of when I met Splunk.

Oh the grandiose plans I had for killing my various Cacti installs with this new, fancy, dynamic tool. All the customization I wanted was already done for me in Splunk. Finally, I could just focus on working with the data.

The potential was limitless.

That was 3 years ago.

In that time I spoke to many a Splunk enthusiast and one thing was abundantly clear. There was no “standard” way to Splunk SNMP. It was the one thing that everyone did differently. Different pollers, different environments, different custom solutions. There was no simple one-line answer.

And my Cacti installs lived on….

New Year, New SNMP

This year, my new year’s resolution was to finally build THE way for me to reliably and efficiently collect SNMP data en masse (100K+ sources) and ingest it into Splunk.

The one thing I and all the Splunkers I spoke with agreed on was that polling SNMP data is best left to a downstream agent. Just feed Splunk the output and let it do what it does best.

I set out and tested various SNMP pollers:

  • Cacti
  • rrdtool itself
  • collectd [shout out to collectd for nailing the flexible output options, but the non-cron poller (opened a ticket/feature request on it) and the config management… I just couldn’t go that route in my environment]
  • OpenNMS [sweet option for writing to an interface, but again I was worried at scale]
  • SolarWinds [too much for what I really needed]
  • Splunk SNMP Modular Input [again, a solid job, but no cron made it non-viable for me]

I was looking for the perfect mix of easy SNMP administration/supportability and reliable output formatting for Splunk. Plus, I already had multiple reliable Cacti deployments with 4+ years uptime, with over 50k data sources each, so if there was a way I could retrofit something to use that existing infrastructure it would be a big win.

In my opinion and to my surprise, old faithful Cacti, who I had set out to destroy, still reigned supreme when it came to easy, reliable management of massive amounts of SNMP, but for one major flaw….

SNMP polling output.

How would we get the data out of Cacti and into Splunk??

Mirage is born

I enlisted the help of some amazing friends and colleagues, and after a failed attempt at ingesting rrd files, it soon dawned on us:


How I didn’t think of this 3 years ago is beyond me.

We utilized Cacti’s API architecture to design a plug-in that mirrors Spine SNMP poller data to a log file before it is written to RRDs, outputting beautifully structured, easy-to-consume KV pairs.

Mirage was born.

We then built a Splunk add-on to help enrich the data we get, leveraging Cacti’s treasure trove of data sitting in its MySQL database.

A week later, when we stood back and watched it all at work, both in a large ISP production environment and at home in our mini-labs, it was clear we could finally cross Splunk SNMP off our whiteboards.

Over the next couple of months, we will be hard at work iterating on the monster we have created. We hope to get feedback from what I believe to be a large community of sysadmins looking for a way to leverage all the work they did building out Cacti, as we move to the next generation of IT Ops monitoring and intelligence tool-sets.

Try Splunk SNMP data with Cacti Mirage and let us know what is working and what opportunities there are to make Mirage even better!


To Splunk SNMP data with Cacti Mirage as your acquisition layer, you will require:


Before you can Splunk SNMP data, you must acquire it by installing the Mirage Plug-in for Cacti. Mirage allows you to mirror Cacti poller data to a log file for easy ingestion into other systems. This also lets you breathe new life into your Cacti installs by using Cacti as your more-than-capable SNMP poller (assuming you are using Spine), while letting more modern upstream systems handle the rest.

Connect to the Cacti server CLI and navigate to CACTI_HOME/plugins (likely under /usr/share or /var/www)

cd /var/www/cacti/plugins

wget -O


mv mirage-master mirage

chown -R www-data:www-data mirage

Log into the Cacti GUI and navigate to the Console > Plugin Management.
Click to install mirage, then to enable it.
Navigate to Console > Settings
Review the Mirage settings and save.
Wait until the next Cacti poll and ensure the mirage_poller_output.log is populated in CACTI_HOME/log/

cd /var/www/cacti/log

ls -la

root@n00bserver:/var/www/cacti/log# ls -la

total 180
drwxr-xr-x  2 www-data www-data  4096 Jan 29 00:45 .
drwxr-xr-x 14 www-data www-data  4096 Jan 26 10:19 ..
-rw-r--r--  1 www-data www-data 93118 Jan 29 01:45 cacti.log
-rw-r--r--  1 www-data www-data    33 Jul 19  2015 .htaccess
-rw-rw-r--  1 www-data www-data 68326 Jan 29 01:45 mirage_poller_output.log
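For a scriptable version of that check (the path is the one assumed throughout this walkthrough):

```shell
# A non-empty mirage log means the plugin hooked the poller output successfully
log=/var/www/cacti/log/mirage_poller_output.log
if [ -s "$log" ]; then
  echo "populated"
else
  echo "missing or empty"
fi
```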


Now to install the Splunk Cacti Mirage TA. The following describes a stand-alone Splunk Enterprise deployment. For distributed environments, install the Cacti Mirage TA on your search heads (SH), forwarders (FW) and indexers (IX).

Start by creating a ‘cacti’ index on your indexer.

Settings > Indexes > New

Name it ‘cacti’ and save with all defaults.

Return to the CLI

cd /opt/splunk/etc/apps

wget -O


cd Splunk_TA_Cacti-master/

mkdir local

cd default

cp inputs.conf /opt/splunk/etc/apps/Splunk_TA_Cacti-master/local

cd ../local

vi inputs.conf

Check that the paths to mirage_poller_output.log and cacti.log are correct for your environment, and set both stanzas to disabled = false. Set the cron to fire as desired (the example below fires every day at 6 am). This scripted input must run before the Splunk lookups can enrich your SNMP data, so schedule it soon after the time you finish editing the inputs if you want to play right away!

[monitor:///var/www/cacti/log/mirage_poller_output.log*]
disabled = false
index = cacti
sourcetype = cacti:mirage

[monitor:///var/www/cacti/log/cacti.log]
disabled = false
index = cacti
sourcetype = cacti:system

[script://./bin/ /var/www/cacti]
source =
disabled = false
index = cacti
sourcetype = cacti:lookup:mirage
interval = 0 6 * * *
#interval = 86400

Save the inputs.conf


Restart Splunk

cd /opt/splunk/bin

./splunk restart

Once Splunk restarts and your configured cron has fired, log into the Cacti GUI and confirm the lookup script has run by examining the ‘Cacti Polling & Lookups Status‘ dashboard. If you do not see it, go back and alter the cron interval in inputs.conf and restart, or wait until its next run.

Now you can run the ‘Cacti lookup mirage build‘ outputlookup to enrich your SNMP events. Once built, you can search the Cacti Quick chart or use SPL to chart your data.


For a quick browse of what you have ingested try:

eventtype=cacti:mirage host=n00bserver | top limit=0 name_cache


Important Fields:

  • ldi (local_data_id) – The unique data source id in Cacti
  • rrdn (rrd_name) – The type of KPI/measurement in the rrd
  • rrdv (rrd_value) – The value the poller was writing to the rrd
  • name_cache – Cacti data source name / name of the graph
  • data_source_type_id – 1 = Gauge, 2 = Counter
  • host – The Cacti server
  • hostname – The device Cacti is querying
  • IP – IP address of the hostname
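A sketch tying these fields together, mirroring the syslog-ng delta search earlier: for Counter data sources (data_source_type_id=2), rrdv is cumulative, so a windowed range() per data source turns it into a per-interval delta (field and eventtype names as documented above):

```
eventtype=cacti:mirage data_source_type_id=2
| reverse
| streamstats window=2 current=t range(rrdv) as rrdv_delta by ldi, rrdn
| timechart span=5m avg(rrdv_delta) by name_cache
```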

For those who may want to play with the local_data_id lookup/db mappings:


Sample Splunk search for graphing cpu on a server

eventtype=cacti:mirage rrdn=cpu hostname=n00bserver
| timechart max(rrdv) by name_cache

You now have the ability to utilize Cacti collected SNMP data in Splunk!

Stay tuned for how to monitor this data with Splunk Machine Learning!

Cacti Pi – How To Install Cacti Spine on Raspberry Pi

In this post we will continue optimizing Cacti with How to install Cacti Spine on Raspberry Pi.

As per
“Spine, formerly Cactid, is a poller for Cacti that primarily strives to be as fast as possible. For this reason it is written in native C, makes use of POSIX threads, and is linked directly against the net-snmp library for minimal SNMP polling overhead.”

Generally, Spine is used in large Cacti installs, where many devices need to be polled in under 300 seconds. As you expand the number of devices you are monitoring, the polling cycle gets longer, and when using cmd.php it can get close to the 300-second window quite fast. Enter Spine.

In our case, although I am monitoring a small network, we want to make sure the polling completes as quickly as possible so we return our precious CPU back to the Raspberry Pi for other processes.

Continue reading Cacti Pi – How To Install Cacti Spine on Raspberry Pi

Cacti Pi – Optimized

After successfully installing Cacti on my RPi and letting it poll for a few cycles with the default localhost device setup, it was apparent that it was a little sluggish. Nothing terribly bad, but obvious when compared to my other deployments, which is understandable considering RPi’s specs.

At first I wondered whether I had made a mistake choosing apache2 over lighttpd; however, I quickly found another optimization option….

This tutorial includes some interesting info on reclaiming some RAM reserved for graphics processing…which RPi as a server really doesn’t require.

Continue reading Cacti Pi – Optimized

How To Create Raspberry Pi SD card via Linux Command Line

This tutorial will walk you through how to create Raspberry Pi SD card via Linux command line (Ubuntu 12.04 LTS)

Instructions for creating the SD card on Windows or Mac are found here

Download desired image via torrent or direct download and place in your Downloads directory:

Continue reading How To Create Raspberry Pi SD card via Linux Command Line

Cacti Pi – How to Install Cacti on Raspberry Pi


UPDATE: I am revisiting this article as it is a few years old now. Currently working through updated steps on Ubuntu 16.04LTS.

Hint: you will need php5.6, not php7.0, as it seems Cacti is broken on php7.0


How to install Cacti on Raspberry Pi running Raspbian Wheezy

apt-get update
apt-get install apache2
apt-get install php5
apt-get install mysql-client mysql-server 

*You will be prompted to set a password for the MySQL root user. Take note of it as you will need it later!*

apt-get install php5-mysql php5-snmp rrdtool snmp snmpd

Run the following command to confirm required php modules are present:

php -m

  • mysql (For configuration, see note below)
  • SNMP (For configuration, see note below)
  • LDAP (Required only when using LDAP authentication)
  • GD (Required only for some Plugins)

Continue reading Cacti Pi – How to Install Cacti on Raspberry Pi

Cacti Pi – Correcting Cacti Timezone

Correcting Cacti Timezone

Battled a super annoying issue with what appeared to be Cacti not displaying graphs, which after half a day of messing around ended up being a timezone setting…..grrrrrrrrrrrr

After discovering Cacti was indeed graphing my expected values, just 4 hours ahead of my local time (EDT, America/Toronto) I realized cacti.log was showing UTC timestamps.

After confirming my Linux system time:

pi@raspberrypi ~ $ date
Fri Aug 17 04:39:13 EDT 2012

I dug around on and found instructions on updating php.ini with timezone values:
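The fix boils down to a single php.ini directive; a minimal sketch using this post’s timezone (run `php --ini` to find which php.ini is loaded):

```conf
; php.ini - give PHP an explicit timezone so Cacti's graphs and logs
; line up with system time (zone from this post's example)
date.timezone = "America/Toronto"
```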

Continue reading Cacti Pi – Correcting Cacti Timezone