diskstat.pl
| Author | Wim Nelis |
|---|---|
| Compatibility | Xymon 4.2 |
| Requirements | Perl, Linux |
| Download | None |
| Last Update | 2019-02-12 |
Description
Script diskstat.pl is a client-side script for servers running Linux, which extracts disk I/O performance parameters from pseudo-file /proc/diskstats and reports those parameters to Xymon. It results in four graphs per monitored disk, showing the I/O request rate, the I/O throughput, the time needed per request and I/O queue length.
As most of the parameters in /proc/diskstats are ever-increasing counters, the reported values are effectively averages over the time since the previous invocation of this script. Typically, this script is invoked once every 5 minutes, so most results are 5-minute averages. In other words, the results cover the complete time the script is running; they are not periodic statistical samples, each covering only a fraction of the time since the previous sample.
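The arithmetic behind these averages can be sketched as follows. This is an illustrative Python sketch, not the Perl of the actual script; field positions follow the kernel's iostats documentation, and whether the subtraction is done by the script or by RRD is an implementation detail of diskstat.pl.

```python
def read_counters(path="/proc/diskstats"):
    """Return {device: list of cumulative counters} from a diskstats-style file."""
    counters = {}
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            # fields[0:3] are major number, minor number and device name;
            # fields[3:] are the cumulative I/O counters (reads completed,
            # reads merged, sectors read, milliseconds reading, and so on).
            counters[fields[2]] = [int(v) for v in fields[3:]]
    return counters

def average_rates(prev, curr, interval):
    """Per-second average rates over one interval, per device."""
    rates = {}
    for dev, now in curr.items():
        if dev in prev:
            rates[dev] = [(n - p) / interval for p, n in zip(prev[dev], now)]
    return rates
```

With the counters of the previous run retained, each invocation turns the cumulative counters into per-second averages over the whole interval, which is exactly why the reported values are 5-minute averages rather than point samples.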
Script diskstat.pl is installed together with Perl module diskstat.pm on the Xymon client. Script diskstat.pl sends all performance data of the selected disks and partitions to Xymon. The selection is defined in hash %Disks in module diskstat.pm. This list is also used to map each disk or partition name onto another name, typically its mount point. The data is sent to Xymon in Devmon format, which requires less configuration than the NCV format. When a graph is generated, a custom script is invoked to extract the mount point from the name of the RRD file and rebuild the original name.
Installation
At the client side, script diskstat.pl and module diskstat.pm need to be installed and one configuration file needs to be modified. At the server side, one script needs to be installed and four configuration files of Xymon need to be modified.
Client side
Copy script diskstat.pl and module diskstat.pm to the server to be monitored, typically to subdirectory ~xymon/client/ext or /usr/lib/xymon/client/ext. Make sure that script diskstat.pl can be executed by user xymon. Enter the name of this directory in the 'use lib' directive in script diskstat.pl, at line 17. Define in hash %Disks in module diskstat.pm the disks and partitions to monitor, and the name of the mount point of each.
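As a hypothetical sketch only (the actual structure is defined in diskstat.pm and may differ), the idea of %Disks is to select the devices to report and to map each device name onto a friendlier name, typically its mount point:

```perl
# Illustrative example; device names and mount points are site-specific.
our %Disks = (
  'sda2' => '/',
  'sda3' => '/home',
  'sdb1' => '/var/lib/mysql',
);
```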
Edit the following section to reflect your environment and add it to ~xymon/client/etc/tasks.cfg or put it as a separate file named diskstat.cfg in subdirectory /usr/lib/xymon/client/etc/clientlaunch.d:
```
[diskstat]
	ENVFILE /path/to/xymon/client/etc/xymonserver.cfg
	CMD /path/to/xymon/client/ext/diskstat.pl
	LOGFILE /path/to/xymon/client/logs/diskstat.log
	INTERVAL 5m
```
Server side
The Xymon server must be told that this test uses the Devmon format. Therefore add the following two lines to ~xymon/server/etc/xymonserver.cfg, or put them in a file named ~xymon/server/etc/xymonserver.d/diskstat.cfg:
```
TEST2RRD+=",diskstat=devmon"
GRAPHS+=",diskstat::1,diskstat0::1,diskstat1::1,diskstat2::1"
```
The results are displayed in four graphs, named [diskstat], [diskstat0], [diskstat1] and [diskstat2]. These graphs are multi-graphs and use script genlgt.pl to generate the graph titles.
Copy script genlgt.pl to ~xymon/server/ext.
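The exact logic lives in genlgt.pl itself; purely as an illustration, a mount point such as /home cannot appear literally in an RRD filename, so it is presumably encoded, for example with commas replacing slashes as Xymon does for its own disk RRDs. Under that assumption (which may not match genlgt.pl's actual scheme), decoding the title back out of the filename could look like this Python sketch:

```python
import re

def mountpoint_from_rrd(filename):
    """Recover a mount point from a filename of the form diskstat.<name>.rrd.

    Assumption: slashes in the mount point were replaced by commas, and
    'root' stands in for the root filesystem itself."""
    m = re.match(r'^diskstat\.(.+?)\.rrd$', filename)
    if m is None:
        return None
    name = m.group(1)
    return '/' if name == 'root' else '/' + name.replace(',', '/')
```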
Modify the directory names in the TITLE directives in the following graph definitions and add the result to ~xymon/server/etc/graphs.cfg.
```
[diskstat0]
	TITLE exec:/path/to/xymon/server/ext/genlgt.pl
	YAXIS Throughput [B/s]
	FNPATTERN ^diskstat\.(.+?)\.rrd$
	DEF:RdAm@RRDIDX@=@RRDFN@:RdAmount:AVERAGE
	DEF:WrAm@RRDIDX@=@RRDFN@:WrAmount:AVERAGE
	LINE1:RdAm@RRDIDX@#@COLOR@:Read
	GPRINT:RdAm@RRDIDX@:MIN: Min\: %5.1lf %sB/s
	GPRINT:RdAm@RRDIDX@:MAX:Max\: %5.1lf %sB/s
	GPRINT:RdAm@RRDIDX@:AVERAGE:Avg\: %5.1lf %sB/s
	GPRINT:RdAm@RRDIDX@:LAST:Cur\: %5.1lf %sB/s\n
	LINE1:WrAm@RRDIDX@#@COLOR@:Write
	GPRINT:WrAm@RRDIDX@:MIN:Min\: %5.1lf %sB/s
	GPRINT:WrAm@RRDIDX@:MAX:Max\: %5.1lf %sB/s
	GPRINT:WrAm@RRDIDX@:AVERAGE:Avg\: %5.1lf %sB/s
	GPRINT:WrAm@RRDIDX@:LAST:Cur\: %5.1lf %sB/s\n

[diskstat1]
	TITLE exec:/path/to/xymon/server/ext/genlgt.pl
	YAXIS Time per request [s/r]
	FNPATTERN ^diskstat\.(.+?)\.rrd$
	DEF:RdTim@RRDIDX@=@RRDFN@:RdTime:AVERAGE
	DEF:RdSys@RRDIDX@=@RRDFN@:RdRequest:AVERAGE
	DEF:RdMrg@RRDIDX@=@RRDFN@:RdMerge:AVERAGE
	DEF:WrTim@RRDIDX@=@RRDFN@:WrTime:AVERAGE
	DEF:WrSys@RRDIDX@=@RRDFN@:WrRequest:AVERAGE
	DEF:WrMrg@RRDIDX@=@RRDFN@:WrMerge:AVERAGE
	CDEF:RdTm@RRDIDX@=RdTim@RRDIDX@,1000,/,RdSys@RRDIDX@,RdMrg@RRDIDX@,+,/
	CDEF:WrTm@RRDIDX@=WrTim@RRDIDX@,1000,/,WrSys@RRDIDX@,WrMrg@RRDIDX@,+,/
	LINE1:RdTm@RRDIDX@#@COLOR@:Read
	GPRINT:RdTm@RRDIDX@:MIN: Min\: %5.1lf %ss/r
	GPRINT:RdTm@RRDIDX@:MAX:Max\: %5.1lf %ss/r
	GPRINT:RdTm@RRDIDX@:AVERAGE:Avg\: %5.1lf %ss/r
	GPRINT:RdTm@RRDIDX@:LAST:Cur\: %5.1lf %ss/r\n
	LINE1:WrTm@RRDIDX@#@COLOR@:Write
	GPRINT:WrTm@RRDIDX@:MIN:Min\: %5.1lf %ss/r
	GPRINT:WrTm@RRDIDX@:MAX:Max\: %5.1lf %ss/r
	GPRINT:WrTm@RRDIDX@:AVERAGE:Avg\: %5.1lf %ss/r
	GPRINT:WrTm@RRDIDX@:LAST:Cur\: %5.1lf %ss/r\n

[diskstat2]
	TITLE exec:/path/to/xymon/server/ext/genlgt.pl
	YAXIS Queue length []
	FNPATTERN ^diskstat\.(.+?)\.rrd$
	-l 0
	DEF:IwTim@RRDIDX@=@RRDFN@:IoWTime:AVERAGE
	CDEF:Aql@RRDIDX@=IwTim@RRDIDX@,1000,/
	LINE1:Aql@RRDIDX@#@COLOR@:Queue length
	GPRINT:Aql@RRDIDX@:MIN:Min\: %5.1lf %s
	GPRINT:Aql@RRDIDX@:MAX: Max\: %5.1lf %s
	GPRINT:Aql@RRDIDX@:AVERAGE: Avg\: %5.1lf %s
	GPRINT:Aql@RRDIDX@:LAST: Cur\: %5.1lf %s\n

[diskstat]
	TITLE exec:/path/to/xymon/server/ext/genlgt.pl
	YAXIS Request rate [r/s]
	FNPATTERN ^diskstat\.(.+?)\.rrd$
	DEF:RdRq@RRDIDX@=@RRDFN@:RdRequest:AVERAGE
	DEF:RdMr@RRDIDX@=@RRDFN@:RdMerge:AVERAGE
	DEF:WrRq@RRDIDX@=@RRDFN@:WrRequest:AVERAGE
	DEF:WrMr@RRDIDX@=@RRDFN@:WrMerge:AVERAGE
	LINE1:RdRq@RRDIDX@#@COLOR@:Read
	GPRINT:RdRq@RRDIDX@:MIN: Min\: %5.1lf %sr/s
	GPRINT:RdRq@RRDIDX@:MAX:Max\: %5.1lf %sr/s
	GPRINT:RdRq@RRDIDX@:AVERAGE:Avg\: %5.1lf %sr/s
	GPRINT:RdRq@RRDIDX@:LAST:Cur\: %5.1lf %sr/s\n
	LINE1:WrRq@RRDIDX@#@COLOR@:Write
	GPRINT:WrRq@RRDIDX@:MIN: Min\: %5.1lf %sr/s
	GPRINT:WrRq@RRDIDX@:MAX:Max\: %5.1lf %sr/s
	GPRINT:WrRq@RRDIDX@:AVERAGE:Avg\: %5.1lf %sr/s
	GPRINT:WrRq@RRDIDX@:LAST:Cur\: %5.1lf %sr/s\n
	LINE1:RdMr@RRDIDX@#@COLOR@:RdMerge
	GPRINT:RdMr@RRDIDX@:MIN:Min\: %5.1lf %sr/s
	GPRINT:RdMr@RRDIDX@:MAX:Max\: %5.1lf %sr/s
	GPRINT:RdMr@RRDIDX@:AVERAGE:Avg\: %5.1lf %sr/s
	GPRINT:RdMr@RRDIDX@:LAST:Cur\: %5.1lf %sr/s\n
	LINE1:WrMr@RRDIDX@#@COLOR@:WrMerge
	GPRINT:WrMr@RRDIDX@:MIN:Min\: %5.1lf %sr/s
	GPRINT:WrMr@RRDIDX@:MAX:Max\: %5.1lf %sr/s
	GPRINT:WrMr@RRDIDX@:AVERAGE:Avg\: %5.1lf %sr/s
	GPRINT:WrMr@RRDIDX@:LAST:Cur\: %5.1lf %sr/s\n
```
Define the diskstat graphs to be multi-graphs with the following modification in ~xymon/server/etc/cgioptions.cfg:
```
CGI_SVC_OPTS="... --multigraphs=diskstat,diskstat0,diskstat1,diskstat2"
```
Finally, define that all diskstat graphs should be shown in the 'trends' column of the monitored host by adding the following directive in ~xymon/server/etc/hosts.cfg:
```
<Ip> <Host> # ... TRENDS:*,diskstat:diskstat|diskstat0|diskstat1|diskstat2
```
Source
diskstat.pl
diskstat.pm
Bugs
The combination of multi-graphs and sub-graphs does not work well in Xymon. The problem becomes visible if you monitor more than one disk or partition: in column 'diskstat' there is one graph per monitored partition, but in column 'trends' the data of multiple partitions is shown in one graph.
To do
An option is to rework this script to send a trends message to Xymon instead of a status message. The lack of a column named 'diskstat' might be an advantage for some.
Changelog
- 2012-04-02
- Initial release
- 2019-02-12
- Use Devmon format to pass the statistics to Xymon / RRD instead of the NCV format.