Cacti on Debian Linux

From the Laub-Home.de Wiki

This is a collection of various Cacti tools and extensions, and above all their installation on Debian and Ubuntu Linux.

Installing Cacti

First, install the Cacti package:

aptitude install cacti cacti-spine

During the installation you are asked whether you want to configure the database via dbconfig. Simply confirm with yes and follow the remaining prompts.
After the installation, Cacti should be available at the following URL:

Log in here as user admin without a password, then set the desired admin password. On the next page everything should be marked OK; otherwise adjust the settings accordingly or install the missing software.
The Cacti interface should now be visible, and localhost should appear with a few default graphs under Graphs.

Configuring SNMP

How to install and configure SNMP is described here:

Configuring Cacti for Linux Servers

Configuring a Linux server is quite easy with the following template. First download the following file and import the template cacti_host_template__linux_host_ucdnetsnmp.xml.

Then create a new device under Device and choose the Linux host as the template.

Apache Stats in Cacti

To get Apache statistics into Cacti, ExtendedStatus must first be enabled on the web server:
/etc/apache2/mods-available/status.conf

<IfModule mod_status.c>
#
# Allow server status reports generated by mod_status,
# with the URL of http://servername/server-status
# Uncomment and change the ".example.com" to allow
# access from other hosts.
#

ExtendedStatus On

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from localhost ip6-localhost
#    Allow from .example.com
</Location>

a2enmod status
/etc/init.d/apache2 restart

Test it:

apache2ctl fullstatus
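The apache-stats script reads the machine-readable variant of this page (the ?auto endpoint), which is plain key: value text. A minimal sketch of pulling one counter out of such output; the sample text below is an assumed example of the format, not a live response:

```shell
# Assumed sample of http://servername/server-status?auto output (not live data):
sample='Total Accesses: 1563
Total kBytes: 9241
BusyWorkers: 1
IdleWorkers: 9'

# Extract a single counter the way a poller script might:
busy=$(printf '%s\n' "$sample" | awk -F': ' '/^BusyWorkers/ {print $2}')
echo "$busy"
```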

Next, this full status must be fed into SNMP; this is done with the apache-stats Python script by Glen Pitt-Pladdy.

Place this script under /etc/snmp/ and set the following permissions:

chmod +x /etc/snmp/apache-stats

The tool needs the following directory, created with the following ownership:

mkdir -p /var/local/snmp/cache
chown snmp: /var/local/snmp/cache

Install the script's dependency:

aptitude install python-urlgrabber

Test apache-stats:

/etc/snmp/apache-stats

and enable it in SNMP by appending the following line to this file:
/etc/snmp/snmpd.conf

extend apache /etc/snmp/apache-stats

Now restart SNMP:

/etc/init.d/snmpd restart

Finally, equip Cacti itself with this template and add the included graphs to the server:

Caution on Debian Squeeze

The following errors appeared in the Cacti log:

02/09/2011 11:25:05 AM - CMDPHP: Poller[0] Host[23] DS[638] WARNING: Result from SNMP not valid. Partial Result: U

02/09/2011 11:25:05 AM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'212.21.x.x', and OID:'NET-SNMP-EXTEND-MIB::nsExtendOutLine."apache".13'

If you are running Debian Squeeze, the package snmp-mibs-downloader must be installed from the non-free repository. First enable the repository in /etc/apt/sources.list:

deb http://ftp.de.debian.org/debian/ squeeze non-free
deb-src http://ftp.de.debian.org/debian/ squeeze non-free

then update the package list and install the package:

aptitude update
aptitude install snmp-mibs-downloader

Now enable the MIBs:
/etc/snmp/snmp.conf

#
# As the snmp packages come without MIB files due to license reasons, loading
# of MIBs is disabled by default. If you added the MIBs you can reenable
# loading them by commenting out the following line.
#mibs :
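Commenting out that line can also be scripted; here is a sketch run against a throwaway copy (on a real system the file is /etc/snmp/snmp.conf and you would edit it in place):

```shell
# Work on a temporary copy so nothing real is touched:
printf 'mibs :\n' > /tmp/snmp.conf.demo
# Comment out the "mibs :" line so net-snmp loads the downloaded MIBs:
sed -i 's/^mibs :/#mibs :/' /tmp/snmp.conf.demo
cat /tmp/snmp.conf.demo
```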

and everything should now work :-)

Monitoring Device IO

To use this extension, first check whether the snmp-diskio module is running in SNMP on the client side. This can be done with the following command:

snmpwalk -v1 -c COMMUNITYNAME  HOST-IP .1.3.6.1.4.1.2021.13.15.1.1.1

The output should look roughly like this:

iso.3.6.1.4.1.2021.13.15.1.1.1.1 = INTEGER: 1
iso.3.6.1.4.1.2021.13.15.1.1.1.2 = INTEGER: 2
iso.3.6.1.4.1.2021.13.15.1.1.1.3 = INTEGER: 3
iso.3.6.1.4.1.2021.13.15.1.1.1.4 = INTEGER: 4
iso.3.6.1.4.1.2021.13.15.1.1.1.5 = INTEGER: 5
iso.3.6.1.4.1.2021.13.15.1.1.1.6 = INTEGER: 6
iso.3.6.1.4.1.2021.13.15.1.1.1.7 = INTEGER: 7
iso.3.6.1.4.1.2021.13.15.1.1.1.8 = INTEGER: 8
iso.3.6.1.4.1.2021.13.15.1.1.1.9 = INTEGER: 9

On Ubuntu and RedHat Linux this should work out of the box. Next, the following TAR.GZ is needed:

This archive contains two files:

  • net-snmp_devio.xml
  • cacti_data_query_ucdnet_device_io.xml

The file net-snmp_devio.xml should be placed in the following Cacti folder: /usr/share/cacti/site/resource/snmp_queries/ (Ubuntu default), or wherever your Cacti xml-query-files live. The file cacti_data_query_ucdnet_device_io.xml must then be imported via the Cacti web interface. Once both are done, the new monitor can be added under Devices --> Associated Data Queries, and the devices to monitor can then be selected under Create New Graphs. More info here:

Monitoring CheckPoint Firewalls

This archive contains two files:

  • cacti087d_host_template_firewall_-_checkpoint.xml (import as a template)
  • checkpoint_fwIfTable.xml (place in the xml-query-files folder)

Monitoring IRONPORT Mail Gateways

  • Cacti-IronPort.zip
  • cacti_host_template_ironport_-_mail_appliance.xml (import as a template)
  • Ironport-Cacti-Template.xml (import as a template)
  • ironport_mail_appliance.xml (place in the xml-query-files folder)

Monitoring JAVA JVM

# Enable Java SNMP
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.snmp.port=10002 -Dcom.sun.management.snmp.acl=/etc/tomcat6-test/tomcat-snmp.acl -Dcom.sun.management.snmp.interface=127.0.0.1"
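The ACL file referenced by -Dcom.sun.management.snmp.acl must exist and be readable only by the Tomcat user, or the JVM will refuse to start its SNMP agent. A minimal read-only sketch based on the JDK's snmp.acl.template; the community name and manager host here are assumptions to adjust to your setup:

```
acl = {
  {
    communities = public
    access = read-only
    managers = localhost
  }
}
```

Restrict the file afterwards, e.g. chmod 600 and chown to the Tomcat user.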

Monitoring MySQL

Monitoring ESX/ESXi in Cacti

Original post by ldjones48 in the Cacti forum; I couldn't explain it better myself:
For reference, <cacti_path> on Debian is /usr/share/cacti/site/

Hi Guys.

This is a long one, it took me around three to four days of searching the web, understanding how the cacti could interact with the vCentre server and poll the VM’s for the data. I then had to figure out how cacti could take this data and graph it.

The main thing to keep in mind is that cacti normally uses SNMP to retrieve data from remote hosts and graph them. You have to make a paradigm shift when thinking about data retrieval from the vCentre servers in that it uses SSL connections over HTTPS.

To start with you need to download the following zip file (scripts) which contains the following two files

esxiograph.sh
check_esx3.pl

These files are the core of how the data is retrieved from the vCentre infrastructure. Firstly the check_esx3.pl Perl script will be called from the esxiograph.sh script using the details specified in a certain format.

These two files will need to be uploaded to your cacti server and placed in the folder “<cacti_path>/scripts” mine was in “/var/www/cacti/scripts/”. Once they have been uploaded you will need to edit the esxiograph.sh to change where it looks for the check_esx3.pl.

If you scroll down to the bottom you will see a section referencing the check_esx3.pl script, this needs to be changed to where we just placed the file as this was written for another program.

<cacti_path>/scripts/esxiograph.sh

io_vm)
type=io_vm
io_vm_all=`perl <cacti_path>/scripts/check_esx3.pl -H $2 -N $3 -u $4 -p $5 -l IO`
check_io_vm
;;
cpu_vm)
type=cpu_vm
cpu_vm_all=`perl <cacti_path>/scripts/check_esx3.pl -H $2 -N $3 -u $4 -p $5 -l CPU`
check_cpu_vm
;;
mem_vm)
type=mem_vm
mem_vm_all=`perl <cacti_path>/scripts/check_esx3.pl -H $2 -N $3 -u $4 -p $5 -l MEM`
check_mem_vm
;;
net_vm)
type=net_vm
net_vm_all=`perl <cacti_path>/scripts/check_esx3.pl -H $2 -N $3 -u $4 -p $5 -l NET`
check_net_vm
;;
io_vs)
type=io_vs
io_vs_all=`perl <cacti_path>/scripts/check_esx3.pl -H $2 -u $3 -p $4 -l IO`
check_io_vs
;;
cpu_vs)
type=cpu_vs
cpu_vs_all=`perl <cacti_path>/scripts/check_esx3.pl -H $2 -u $3 -p $4 -l CPU`
check_cpu_vs
;;
mem_vs)
type=mem_vs
mem_vs_all=`perl <cacti_path>/scripts/check_esx3.pl -H $2 -u $3 -p $4 -l MEM`
check_mem_vs
;;
net_vs)
type=net_vs
net_vs_all=`perl <cacti_path>/scripts/check_esx3.pl -H $2 -u $3 -p $4 -l NET`
check_net_vs
;;	

Next we need the VMware vSphere SDK for Perl:


Once the SDK kit has been installed, you will need to create a ‘readonly’ user account on your vSphere system to enable the cacti server to login and retrieve the data. To do this, login to your vCentre server or use the vSphere client to login to the control panel. Once here click on the vcentre server top level and click on permissions. In here you can add your user. If you are in a Windows domain you can add the user in active directory and then add it here. When adding the user only set the permission level to ‘readonly’. Make sure that propagate is ticked so this permission is inherited by all child nodes. On the other hand you could add the permissions on each individual server / host you wish to allow readings from.

Once you have created your user you can use command line code to test that the cacti server can communicate with the vCentre server. This is done with the following commands.

First of all we will test with the check_esx.pl script to ensure that the SDK kit can login with the details and pull the required data.

perl <cacti_path>/scripts/check_esx3.pl -H <vcentre server> -N <name of virtual machine> -u <username> -p “<password>” -l NET

There are several things to remember here, you can either poll the vCentre server or the hyper-visors directly. I have chosen to go with the vCentre server in my environment due to the fact that DRS may start to move machines around for level distribution. This would cause issues later on with graphing if I am directly polling the hyper-visors. On the other hand if you do not have a vCentre server and only hyper-visors this would be perfect. Please also note that the passwords need to be in quotation marks.

Moving on you will next need to test that the ./esxiograph.sh can parse the data retrieved from the check_esx3.pl and output it in a format that is easily readable by cacti’s graphing system. Please be aware before you test the next step you need to ensure that ‘bc’ is installed on the cacti server. This caused me several hours of headache not knowing why the calculation was not working. Please first install ‘bc’ with the following command.

aptitude install bc

Once this is installed you can go ahead and test the script with the following command

chmod +x /<cacti_path>/scripts/esxiograph.sh
/<cacti_path>/scripts/esxiograph.sh net_vm <vcentre> <vm name> <username> <”password”>

This should be all you need to do in the CLI side of the installation, next we need to import a template into cacti via the web GUI to allow us to start graphing the data collected.

First of all you need to download the following cacti_host_template_vmware_esxi (attached to this post) file. Extract this to your desktop and then login to your cacti server via the web gui. Once logged in goto the console tab at the top and then goto Import Template under the Import/Export heading. Click choose file and then navigate to the extracted file on your desktop. Then click import.

Once this has imported you will need to add a device. Under the console tab again, goto devices and then add device in the top right. Fill out the description and hostname. From the dropdown menu under host template select ‘VMWare esxi’. You will need to set the downed device detection to PING and make sure all firewalls in between your cacti server and vcentre server allow ping from the cacti IP otherwise the graphs will never be created. You can ignore all other fields and click create.

Once the device has been added goto the console tab and then under heading ‘Create’ click new graph, select your vcentre as the host and you will see 8 options to choose from. The options that start with VM_ are referring to the virtual machines VS_ are referring to the virtual host server (hyper-visor). For this guide I am only concerned with ‘VM Net Load’ select this and click create. You will now have to fill out the options as you did CLI style. Populate the vm name (case sensitive), username, and password. Please note the password will need to be in quotation marks such as “*nyy7^%ujl” otherwise it will not work. Click create and the graph will be created.

If after a while you have no graph and receive an error from the RRD tool, I found that the Graph Template for ‘ESXi – VM Net Load’ was using CF Type Last as opposed to Average for the two data sources inbound and outbound. I believe I exported these templates with the changes made but if not you may want to check this.

Downloads

Sources