This page contains information about some of the code I've written over the years. Most of it is written in Perl, and has to do with system administration tasks.
2008-04-21 I've added a separate page for snippets of Perl code which I find myself using over and over again.
If you've seen this page before and are wondering why I added this to the top of the page rather than the bottom... I originally wrote these programs back in 1997, but I didn't think of adding them to the web site until 2007-08-05.
One of the most common things a system administrator needs to do is write a script which must run with root privileges, but which will be run by a non-root process. Almost all *nix-like systems have the ability to mark an executable program with a "setuid bit", which tells the kernel that whenever that particular program executes, it should run under the userid which owns the executable rather than the userid which called for the program to be run.
However, many systems do not allow this "setuid" behaviour to apply to scripts. In cases like this, it becomes necessary to write a small "wrapper" program, which is a compiled binary (and therefore gains the benefit of the "setuid" mechanism) that runs the script.
On systems like Linux and Solaris, even this doesn't always work. They support the notion of "effective" user and group IDs, as well as the "real" user and group IDs. A running process has the privileges of its "effective" userid, but if it calls an exec() function to start a new process, that new process runs under the "real" user and group ID of the parent process. And on these systems, a program running in a "setuid" situation runs with the setuid user (the owner of the executable file) as the "effective" user, and the original user (which started the command) as the "real" user.
The answer to this problem is simple- the wrapper program needs to set its "real" user and group IDs to be the same as the "effective" user and group IDs, before calling the exec() function.
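In C, the heart of such a wrapper looks something like this (a simplified sketch, not the actual wrapper1.c; a real wrapper would follow this call with execv()):

```c
#include <unistd.h>

/* Copy the effective user/group IDs over the real ones, so that any
 * program we exec() afterward keeps the setuid privileges.  Set the
 * group ID first, while we may still have the privilege to do so. */
int become_effective(void)
{
    if (setgid(getegid()) != 0)
        return -1;
    if (setuid(geteuid()) != 0)
        return -1;
    return 0;
}
```

Note the ordering: once setuid() has dropped root, a later setgid() call may no longer be permitted.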
The programs below do this. There are two versions of the program.
The first version passes the command line arguments from the wrapper to the child program. For example, if you build a wrapper "blah" which runs the command "blah.sh", then if you issue the command "blah abc xyz", it would in turn execute "blah.sh abc xyz".
The second version ignores any command line arguments, and instead uses a compiled-in list of arguments for the child program. This is useful in cases where you need a wrapper around a command, but you don't want users to be able to use your wrapper with any arbitrary arguments. For example if you were to build a wrapper "mycat" around "/bin/cat", users would be able to use a command like "mycat /etc/shadow" (or even worse, "mycat blah > /etc/shadow") to do things they wouldn't normally be able to do.
To compile the first version of the program, you need to set the value of the PROG macro. You can do this by editing the source, or by defining the value on the gcc command line. For example, to build a wrapper called "run-newmrh" which runs the real command "/var/qmail/bin/qmail-newmrh" (which ignores its command line parameters anyway, and is therefore safe to use with the first type of wrapper) you can use either of these methods:
To edit the source, find this block of lines...

#ifndef PROG
#error Please define PROG and try again.
#endif

... and change them to look like this:

#ifndef PROG
#define PROG "/var/qmail/bin/qmail-newmrh"
#endif

Then compile it using the normal command:

# gcc -o run-newmrh wrapper1.c
To specify the program name on the gcc command line, compile it using a command like this:

# gcc -DPROG=\"/var/qmail/bin/qmail-newmrh\" -o run-newmrh wrapper1.c
To compile the second version of the program, you must edit the source code and add the command you wish to run. For example, if you want to build a wrapper "restart-smtpd" which runs the real command "/usr/local/bin/svc -t /service/qmail-smtpd", you would do this:
To edit the source, find this block of lines...
/* you MUST set the command information in this next line */
char *my_args[] = { , NULL } ;
... and change them to look like this:
/* you MUST set the command information in this next line */
char *my_args[] = { "/usr/local/bin/svc" , "-t" ,
"/service/qmail-smtpd" , NULL } ;
Then compile it using the normal command:
# gcc -o restart-smtpd wrapper2.c
Here are the links to download the files.
This is another program I wrote long ago, and just forgot to add it to this web site until 2007-11-08.
This is just your basic "hex dump" program... it reads the file named on the command line (or STDIN if no file is named) and prints out the individual bytes, in both hex and ASCII (with "." for non-printable characters). I wrote it back in 1996 and haven't really looked at it since then. It may have bugs, but if so I haven't found them, and I've been using this program several times a day ever since I wrote it.
The output looks like this:
> echo 'Words are very unnecessary, they can only do harm.' | hdump
00000000 - 57 6F 72 64 73 20 61 72 - 65 20 76 65 72 79 20 75 :Words are very u:
00000010 - 6E 6E 65 63 65 73 73 61 - 72 79 2C 20 74 68 65 79 :nnecessary, they:
00000020 - 20 63 61 6E 20 6F 6E 6C - 79 20 64 6F 20 68 61 72 : can only do har:
00000030 - 6D 2E 0A - :m.. :
As you can see, very basic. There are probably hundreds of programs out there which do the same thing, I'm just adding it here in case you don't already have one. Enjoy.
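If you're curious how such a program works, the per-line formatting can be sketched in C like this (an approximation of the output format; the real hdump.c may differ in details such as padding on short lines):

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Format one line of hex-dump output: an 8-digit offset, sixteen hex
 * bytes split into two groups of eight, and the ASCII rendering with
 * "." for non-printable bytes.  `len` is how many bytes are valid. */
void dump_line(unsigned long offset, const unsigned char *buf, int len, char *out)
{
    char *p = out;
    p += sprintf(p, "%08lX -", offset);
    for (int i = 0; i < 16; i++) {
        if (i == 8)
            p += sprintf(p, " -");
        if (i < len)
            p += sprintf(p, " %02X", buf[i]);
        else
            p += sprintf(p, "   ");         /* pad short final lines */
    }
    p += sprintf(p, " :");
    for (int i = 0; i < len; i++)
        *p++ = isprint(buf[i]) ? buf[i] : '.';
    for (int i = len; i < 16; i++)
        *p++ = ' ';
    *p++ = ':';
    *p = '\0';
}
```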
File: | hdump.c |
Size: | 2,008 bytes |
Date: | 2007-11-08 16:40:55 +0000 |
MD5: | 2936b05bcbff2fd206cf44c982cef5ab |
SHA-1: | 4c6583fbb944fac1547d3c386631715f11a59134 |
RIPEMD-160: | bf0e1f7321b2115aabc86af46db792beab4f9a1c |
PGP Signature: | hdump.c.asc |
This module handles sending messages to the syslog, or simulating syslog output while testing a program. It also includes a function to log something and then immediately exit the program.
A program using this library needs the following line at the top:
use Logit ;
If the file is not stored in one of the standard locations (I store it in /usr/local/lib on my systems) you need to add a line like this one before the "use Logit" line.
use lib "/usr/local/lib" ;
Your program will then have two new functions available:
logit($) sends a string to the syslog or to standard out, subject to the options you may have set.
bad($;$) does the same thing as logit(), but after sending the message it exits the program by calling exit(), either using the second parameter or the value of $Logit::badrv.
Example:
This is one way to do it...
bad("Goodbye.",4) ;
This does the exact same thing:
$Logit::badrv = 4 ;
bad("Goodbye.") ;
Note that the only way to do an exit(0) is to set $Logit::badrv to 0, since a zero as a second parameter is interpreted as "no second parameter was entered".
Your program can control how the module "does its thing" by setting certain variables before calling the logit() or bad() functions.
$Logit::fac (default "user") sets the syslog "facility" you wish the messages to use. This should be a string version of the facility name you wish- I have tested it using "user" and "local0" through "local7", although others like "mail", "news", and "daemon" can also be used if needed.
$Logit::pri_logit (default "info") sets the "priority" for messages sent with logit(). The use of priority codes allows syslogd to selectively filter which messages do and don't get written to the log files. Values in order of priority are: "emerg", "alert", "crit", "err", "warning", "notice", "info", and "debug". Note that the "emerg" priority code usually means a system-threatening disaster (such as "out of disk space" or "kernel bug detected") and messages sent with this priority are usually echoed to the physical console and to every login session on the machine.
$Logit::pri_err (default "err") sets the priority for messages sent with the bad() function. The same codes are available, and the same warning about "emerg" applies.
$Logit::showpid (default 1) tells whether or not to include the process id in the entry sent to the syslog system (or to stdout, if you are testing.) This is generally a good thing to do, especially if you are testing something that will run in the background or may have multiple instances running at once.
$Logit::badrv (default 0) sets the value passed to exit() after the bad() function logs its message. This would normally be something other than zero, especially if another program will be checking the return value from your script.
$Logit::debug (default 0) is used for debugging Logit.pm itself. It causes the logit() and bad() functions to print information about what they're doing, regardless of whether the logs are going to the live syslog or to stdout (see below.)
$Logit::stdout (default 0) will cause logit() and bad() to send their final messages to stdout instead of to the syslog system. This is useful when testing your script, to make sure the log output looks the way you expect it to.
$Logit::logger (default "") controls how Logit.pm works.
If this value is empty (the default), the logit() and bad() functions will use the services of the Sys::Syslog module (available through CPAN) to send the messages directly to the system's syslog channel. This is the fastest and least memory-intensive way to do it. If it works on your system, this is what you should use.
If this value is not an empty string, the logit() and bad() functions will call this as an external program to send the message to the syslog. The program should be "/usr/bin/logger" or wherever the equivalent program is on your system.
File: | Logit.pm |
Size: | 4,706 bytes |
MD5: | 0e117c4522645f9ec3b7fff4383f6a44 |
SHA-1: | 9ab7a54877fad71b17e81f8a379bf64d2e26fa4a |
RIPEMD-160: | 2ac23393834065e497faa361eda342b4696e50dd |
PGP Signature: | Logit.pm.asc |
This script reads a set of syslog files (e.g. /var/log/messages) and breaks each one into one file per day. For example, it will read a file like /var/log/messages and produce files with names like these:
/var/log/messages.20030227
/var/log/messages.20030228
/var/log/messages.20030301
/var/log/messages.20030302
After breaking the files into individual dates, it can optionally back up the date files to a remote machine (using scp) or to a directory on the local machine.
The program also cleans up after itself. You can configure it to only keep a certain number of days' worth of each log file in the directory where the live log files live (usually "/var/log".) After processing each file, it will count how many old versions are there, and any which are older than that number will be deleted. This can keep your /var/log from filling up while still giving you immediate access to the most recent logs.
Since syslog files don't normally have the year on each line, the program assumes that the year for each line is "the current year" unless the line says December and the current month is January, in which case it assumes the year is "the current year minus one".
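That rule is simple enough to sketch (a hypothetical guess_year() helper, with months numbered 1-12; the real script is written in Perl):

```c
/* The year-guessing rule for syslog lines, which carry no year:
 * assume the current year, unless the line says December and it is
 * currently January, in which case the line is from last year. */
int guess_year(int line_month, int current_month, int current_year)
{
    if (line_month == 12 && current_month == 1)
        return current_year - 1;
    return current_year;
}
```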
The program is configured using the variables in the "configuration" section at the beginning of the program.
$do_cut (default 1): While testing the program I had a set of pre-cut files ready to use, and didn't want to constantly shut down and restart my syslogd, and possibly lose log information if there were a bug in the program. I set this to 0 while testing, and the program skipped the entire "cut off logs" process. You may need this if you decide to play with the program yourself.
$restart (default "/etc/init.d/syslog restart") should be a command line which will cause your syslogd to shut down and restart. This is necessary in order to cut the log files correctly.
$cp (default "/bin/cp") should be the location of the "cp" binary on your system. This is used to backup the finished files to another directory on the same machine.
$scp (default "/usr/bin/scp") should be the location of the "scp" binary on your system. This is used to backup the finished files to another machine.
$bserver (default ""): If you wish to back up the finished log files to another machine, this variable should contain the name or the IP address of that system. If this is blank, no remote backup will be done.
$bport (default 22): For remote backups, this specifies the port number where sshd is listening. The standard port number is 22, but many systems are choosing to run their sshd on a non-standard port number so they won't be hassled with "port scans".
If this number is zero, the scp program will use whatever its default port number is. This will usually be 22, but can be changed in the /etc/ssh/ssh_config file.
$buserid (default ""): For remote backups, this specifies the userid to use when connecting to the remote machine.
$bkey (default ""): For remote backups, it is usually necessary to authenticate using an ssh key pair (unless the sshd is somehow configured to not make this necessary.) This variable should contain the full path to the PRIVATE key file. You should make sure that this key file allows you into the remote machine before trusting that this script will process the backups correctly.
$bdir (default ""): For remote backups, this specifies the directory on the remote machine where the files should be stored. If this value does not start with "/", the directory name will be interpreted as starting with the remote userid's home directory.
For backups to the local machine, this specifies which directory the files will be copied to. This should be an absolute pathname.
$Logit::fac (default "local2") controls what facility the program's log messages will use when sent to the syslog.
$Logit::stdout (default 0) controls where the log messages will be sent. Setting this to 1 will make the messages appear on your terminal, which can be handy if you're playing with this script and need to see what it's doing.
%logfiles contains a hash. The keys are the log filenames to be processed, and the value of each is the number of days' worth of that file to be left in the directory where the original file was found. If the value for a given file is zero, old files will not be deleted automatically.
This program uses Logit.pm (above) to log what it's doing.
2008-10-14 Found and fixed a typo which was causing the chown() and chmod() calls within the fh() function to not work correctly.
File: | cron.cutsyslog |
Size: | 6,881 bytes |
Date: | 2008-10-14 21:24:01 +0000 |
MD5: | d5bde53a1c36340df74666bc681b58bc |
SHA-1: | 17562911bb207f2fc955d3d08d26c13db0552c29 |
RIPEMD-160: | a3ce9992813c505b7f87407999a6e7993c962371 |
PGP Signature: | cron.cutsyslog.asc |
This is a perl script which shows the internal contents of a MIDI file. This web page tells more about it and has the download link.
A discussion developed on a programming mailing list involving algorithms for computing prime numbers. I wrote prime64.c as a test of using pointers rather than array indexes and integers instead of floating-point numbers, to see what the performance improvement would be. I then modified it a little, creating prime32.c which uses 32-bit integers instead of 64-bit integers.
Steve Litt, another member of the list (and the one who started the whole prime-generating thread) came up with another optimization. The idea is this- instead of checking every number, we can bypass the ones which we know are multiples of two or of three. I added this optimization to my program, producing prime32j.c. It does run a little faster, but the effect on the program wasn't as dramatic as I thought it would be.
Whenever I build a machine, for myself or for a client, I install an /etc/issue file which results in a blue banner across the top of the screen, which identifies the machine and the terminal line. For a long time I was editing this file by hand and making adjustments for different hostnames and screen widths, but then I decided to write the mkissue script to automate the process. It's written to be called when the system first boots, somewhere early in the boot process (before the "getty" processes are started) but can be re-run at any time without any problem (as long as when you run it, the terminal window is the same width as the machine's console.)
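The core trick is just an ANSI escape sequence. Here's a hedged sketch (a hypothetical banner_line() helper in C; the real mkissue also works out the hostname, terminal line, and screen width as described above):

```c
#include <stdio.h>

/* Build one /etc/issue-style banner line: bold white text on a blue
 * background, left-justified and padded to the given width so the
 * blue bar spans the screen. */
int banner_line(const char *text, int width, char *out, size_t outlen)
{
    return snprintf(out, outlen, "\033[44;37;1m%-*s\033[0m", width, text);
}
```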
File: | mkissue |
Size: | 2,567 bytes |
MD5: | 351f54cb4a4da19ea4543c94b0441969 |
SHA-1: | 261dec61e8dca831545d9ffd4b7ca7a385a557b4 |
RIPEMD-160: | eca22cf4960a2f49a2271c29fadaddc0551775de |
PGP Signature: | mkissue.asc |
When writing scripts to do automated backups or rotate log files, you often accumulate old files in a certain directory, and if you don't clean them out every so often you end up filling the disk and causing all kinds of other problems.
The delbut script deletes "all but" the newest versions of a file. For example, the command "delbut -10 blah*.log" would delete all "blah*.log" files EXCEPT the ten newest ones.
It's written so that you can test it by not including the "-" in front of the number. For example, "delbut 5 messages.*" would show you which files would be kept and which ones would be deleted, but would not actually delete any files. If you want to actually delete files, you need "delbut -5 messages.*" instead.
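The selection logic amounts to "sort by age, keep the newest N". Sketched in C (hypothetical names; the real delbut is a script working on actual filenames and their timestamps):

```c
#include <stdlib.h>

/* Sort newest-first by mtime; everything from index `keep` onward is
 * a deletion candidate.  Returns how many files would be deleted. */
static int newest_first(const void *a, const void *b)
{
    long ma = *(const long *)a, mb = *(const long *)b;
    return (ma < mb) - (ma > mb);   /* descending order */
}

int mark_deletions(long *mtimes, int n, int keep)
{
    qsort(mtimes, n, sizeof *mtimes, newest_first);
    return (n > keep) ? n - keep : 0;
}
```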
File: | delbut |
Size: | 2,184 bytes |
MD5: | a761945725f32869a3fb856dfabeff6c |
SHA-1: | 0131b3d9f048dee27c4ac2e834811366f816051e |
RIPEMD-160: | 2000ab1f1fb43300ce2235557c12e1dcf1821d0e |
PGP Signature: | delbut.asc |
Many perl modules have "broken" installers, which put the modules in the correct directories but don't make them world readable. This simple script fixes the permissions so that every file in the directories listed in perl's built-in @INC array is world readable.
This script has been mentioned on the mod_perl mailing list and on the LEAP mailing list (the linux users group here in Orlando.)
File: | pfix |
Size: | 1,285 bytes |
MD5: | d055b486a645ccfed122ec2b680b9fd5 |
SHA-1: | 6a3ee9ab5e201a219c480327b12ffa68f567abbd |
RIPEMD-160: | 65a83c874a77559ebc7fe7e3d1f5c202587dc4c6 |
PGP Signature: | pfix.asc |
I wrote this script because a client needed a way to allow a certain userid to upload files to one specific directory on his web site, but didn't want that userid to have access to the rest of his site. I used a program called scponly to create the userid so that the scp and sftp commands work but nothing else will, and it also locked the scp/sftp session into a certain directory by creating a chroot jail for the session.
The same approach can be used with a program like chrootssh to create a chroot jail. I went with scponly for this particular client because it didn't require any patches to openssh, and because I don't need an actual shell session to be jailed- just scp and sftp.
File: | mkjail |
Size: | 5,844 bytes |
MD5: | 6414f1ae3fbe2fb5e54989e9097318ca |
SHA-1: | 02a573db9e1eebb0227e17010731e0c64236762c |
RIPEMD-160: | e2db09da85f6a7f2b9079198f212935587c852f8 |
PGP Signature: | mkjail.asc |
One of the rules by which I write code is that I do not store any passwords in plain text on the server. This means that I have often had to manually figure out the encrypted version of a password.
On most systems, there are two types of password encryption: an old method based on the crypt() function, which only uses the first eight bytes of the user's password, and a newer version which is based on the MD5 hashing algorithm. I normally use MD5 because it's more secure, but I do occasionally need to use the older algorithm.

Most of the scripts I've been writing over the past several years have been written in perl. Perl has the crypt() function built in, and CPAN (an online archive of pre-written perl modules) has the Crypt::PasswdMD5 module.
Because I often need to manually figure out the MD5-password hash of a password (in order to set the initial passwords for programs and so forth) I have written a quick and dirty password encryptor, which will take a plain-text password from the command line and print out the encrypted version.
File: | epw |
Size: | 3,282 bytes |
MD5: | f4ecc1c52088839b427d21c09a5a6c27 |
SHA-1: | 001dc863d54b515c155cd50d4ba4f22ace0418f1 |
RIPEMD-160: | 8d0ec5269ab634775570796761c286ac0f15361f |
PGP Signature: | epw.asc |
To encrypt a password, run this script with the password on the command line. Of course if the password contains spaces, you should quote the password.
The script defaults to using MD5 and generating a random salt. If you wish to use the old crypt()-based encryption, use the -u switch. If you wish to manually enter a salt value, use the -s switch. Here are some examples:
$ epw "this is a test"
$1$prhcrEox$TTayv6uZjBGOap2VOnLmM0
$ epw -u "this is a test"
tiwXyWdpGHw.w
$ epw -s a1b2c3d4 "this is a test"
$1$a1b2c3d4$9NjQztUI7OAOK5QQVx5GO0
$ epw -u -s k9 "this is a test"
k9UU98ODfPyVA
Yes, I know that reading the plain-text password from the command line is not the most secure thing in the world, and that an attacker with access to the system could potentially see the plain-text password by running a ps command at the right time. I normally run this on my desktop or laptop machine rather than on the server, so there are no other users to worry about. If you are worried about it, feel free to change the code and have it get the plain-text password from some other source.
I have decided to start standardizing the look and feel where I offer files for download on this site, and on other sites. I have written a perl script to automatically generate the HTML that I need on the page, as well as generate and verify any PGP signatures I might need in order to offer people the chance to verify the files. The format looks like the block you see below.
File: | download-block |
Size: | 5,500 bytes |
Date: | 2008-08-07 22:59:08 +0000 |
MD5: | 14da60078f0a85bc1fd900f7fa412a6c |
SHA-1: | a37bfc095c0176df11f40b3a4ada5ff64e03d088 |
RIPEMD-160: | ac8862b4b01c2ad70c1a60128d8f830e019b0ce6 |
PGP Signature: | download-block.asc |
2007-09-11 Added the "Date" line to the output.
2007-10-04 Fixed the table's "style" attribute.
2008-08-07 Changed "flastmod" to use "virtual". Allows a download block to reference a file in a different part of the web site.
I will be converting the existing download links on the other pages to this new format over time.
The MD5 and SHA-1 checksums you see with these files may be checked using programs called md5sum and sha1sum, which are available on most machines. You can also use openssl to verify these checksums as well.
$ md5sum download-block
14da60078f0a85bc1fd900f7fa412a6c download-block
$ sha1sum download-block
a37bfc095c0176df11f40b3a4ada5ff64e03d088 download-block
$ openssl md5 download-block
MD5(download-block)= 14da60078f0a85bc1fd900f7fa412a6c
$ openssl sha1 download-block
SHA1(download-block)= a37bfc095c0176df11f40b3a4ada5ff64e03d088
$ openssl rmd160 download-block
RIPEMD160(download-block)= ac8862b4b01c2ad70c1a60128d8f830e019b0ce6
And of course you can use gpg to verify the PGP signature on a message, as long as you have my public key on your keyring.
$ gpg -v download-block.asc
gpg: armor header: Version: GnuPG v1.4.7 (Darwin)
gpg: assuming signed data in `download-block'
gpg: Signature made Thu Aug 7 19:00:27 2008 EDT using DSA key ID 9014AD1A
gpg: using classic trust model
gpg: Good signature from "John M. Simpson <jms1@jms1.net>"
gpg: binary signature, digest algorithm SHA1
This is a C program which reads the bytes from a "pad" file (which is presumably full of random binary data), does an "XOR" transformation against the bytes from "standard in", and writes the results to "standard out". If there are more "message" bytes than there are "pad" bytes, the program loops back around to the beginning of the "pad" and re-uses those same bytes.
This program could be used to "encrypt" files, or messages sent to other people, IF you have some secure method of sharing the "pad" file with them. This could be as simple as handing them a CD-ROM or USB memory stick with files full of random data, or it could be some incredibly complex method- but however you do it, the "pad" files should be protected as securely as the actual messages you transform them with.
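The core loop can be sketched in C like this (a simplified sketch, not the actual xorfile.c; the pad must not be empty):

```c
#include <stdio.h>

/* XOR every byte of `in` against the bytes of `pad`, looping back to
 * the start of the pad when it runs out, and write the result to `out`. */
void xor_stream(FILE *pad, FILE *in, FILE *out)
{
    int c, p;
    while ((c = fgetc(in)) != EOF) {
        if ((p = fgetc(pad)) == EOF) {  /* end of pad: rewind and re-use it */
            rewind(pad);
            p = fgetc(pad);
        }
        fputc(c ^ p, out);
    }
}
```

Because XOR is its own inverse, running the same pad over the output recovers the original input.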
One interesting property of using XOR as the transformation is that you can transform a given message with any number of pads, in any order, and then transform the "encoded" data with the same pads, again in any order, and still produce the original message. For example:
$ ./xorfile pad1 < input > work1
$ ./xorfile pad2 < work1 > work2
$ ./xorfile pad3 < work2 > work3
$ ./xorfile pad1 < work3 > work4
$ ./xorfile pad2 < work4 > work5
$ ./xorfile pad3 < work5 > work6
$ md5sum input work6
17d0c81a9f00c372e183d169eb64c8d2 input
17d0c81a9f00c372e183d169eb64c8d2 work6
I could go on about where to find good random numbers (I use /dev/urandom for testing, and I use the "Hotbits" service from Fourmilab in Switzerland to get "real" random data) and why you shouldn't use things like plain text, MP3, JPEG, or other binary files with a known format, as a pad. However, you can (and should) do your own homework and learn this stuff for yourself.
File: | xorfile.c |
Size: | 4,843 bytes |
MD5: | f9f1326d7d429f549d1ec46dfd6f41fb |
SHA-1: | d9fa3959f3c1c464a293f6d71de6dd6775c3bbd2 |
RIPEMD-160: | b6d425e96b35e2399e939c75326d34ded2f59a5b |
PGP Signature: | xorfile.c.asc |
As the example shows, the program needs the pad filename on the command line. It will read from "standard in", and write to "standard out". Of course, if you need something different, feel free to change it for your own purposes- that's what Open Source Software is all about.
I normally use MRTG to produce traffic graphs for clients' routers and switches, as well as to monitor things like CPU load and network usage for individual servers. And I figured that since I monitor several servers on a regular basis, it might be nice to have the page with the graphs emailed to me every day, so that I can watch the graphs on a regular basis without having to visit fifteen web pages.
My first thought was to just have the server email me the HTML file that MRTG generates. However, the <IMG> tags within the file refer to the images (which are a big part of what I need to see) as relative names rather than full URLs. So I tried adding the base URL to the SRC attribute for each tag, but I ran into two issues:
All but one of the MRTG sites I monitor are protected by passwords, which means the images aren't directly viewable within an HTML email message.
Most reasonably intelligent MUAs (email programs) won't render images from external web sites, for security reasons.
So my next idea was to include the images themselves as attachments with the message. However, neither of the MUAs that I use (Apple's "Mail.app" and Mozilla Thunderbird) would show the images "inline", that is, in the right places within the HTML message- either they didn't show the images at all, or they showed them one after the other at the end of the message.
So now I got to wondering how to make the images show up in the right places... and I figured, the spammers already know how to do this, that's how their "image spam" works. And my server's automatic spam filters keep copies of every message they report to Spamcop, and I'm sure there are several of them every day... so I looked in the "evidence locker", found an image spam, and looked at how they did their HTML and MIME structure.
Here's how it works:
The message itself has a "Content-Type: multipart/related" header.
The first section is the HTML text, with the usual "Content-Type: text/html" header.
Within the HTML, I found an IMG tag which looked like this:
<IMG SRC="cid:2d4de25ca7ea8edf3074c7ef74b03491" ... >
Further down, in the MIME headers for the attached image, I found this:
Content-Type: image/jpeg
Content-Transfer-Encoding: base64
Content-Disposition: inline; filename="filename.jpg"
Content-ID: <2d4de25ca7ea8edf3074c7ef74b03491>
So it appears that this "cid:" is a standard URL scheme, used to match inline images in email with the image files which are attached to the message. So I added this logic to the "email-mrtg" script I was writing, and sure enough, it works. No problems with external images, no inability to see the images because they're on a password-protected server.
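The resulting structure can be sketched like this (a hypothetical build_message() helper in C with placeholder headers; a real message needs the base64-encoded image data and additional headers):

```c
#include <stdio.h>
#include <string.h>

/* Emit a minimal multipart/related skeleton in which the HTML part's
 * "cid:" URL matches the image part's Content-ID header. */
int build_message(const char *cid, const char *boundary, char *out, size_t outlen)
{
    return snprintf(out, outlen,
        "Content-Type: multipart/related; boundary=\"%s\"\r\n\r\n"
        "--%s\r\n"
        "Content-Type: text/html\r\n\r\n"
        "<html><body><img src=\"cid:%s\"></body></html>\r\n"
        "--%s\r\n"
        "Content-Type: image/png\r\n"
        "Content-Transfer-Encoding: base64\r\n"
        "Content-Disposition: inline; filename=\"graph.png\"\r\n"
        "Content-ID: <%s>\r\n\r\n"
        "(base64 data here)\r\n"
        "--%s--\r\n",
        boundary, boundary, cid, boundary, cid, boundary);
}
```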
My life just got easier. Isn't that what computers are meant for?
Before you can use the script, you must configure it. Edit the script using your text editor of choice; near the top you will find these four lines:
my $dir = "/var/www/html/mrtg" ;
my $url = "https://domain.xyz/mrtg" ;
my $sender = "Admin <postmaster\@domain.xyz>" ;
my $recip = "Admin <postmaster\@domain.xyz>" ;
Edit these lines to match the directories and URLs used on your server. Make sure that you have a backslash in front of the "@" sign in the email addresses, or Perl will throw an error message when you run it. The script also looks for the "domain.xyz" phony domain name, and throws an error message if it sees it (because if it's there, you obviously haven't configured the script yet.)
Once you have configured the script, you can run it with one or more filenames or URLs on the command line. It actually strips the directory and/or URL components from the beginning of the name, as well as the ".html" from the end, to get a "base name" which is expected to exist in the configured directory (i.e. you could specify any directory you wanted, and it still only looks in the configured directory for the .html and .png files.)
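The name-stripping step can be sketched like this (a hypothetical base_name() helper in C; the actual script is written in Perl):

```c
#include <string.h>

/* Reduce a filename or URL to its MRTG "base name": drop everything up
 * to the last '/', and a trailing ".html" if present. */
void base_name(const char *arg, char *out, size_t outlen)
{
    const char *slash = strrchr(arg, '/');
    const char *start = slash ? slash + 1 : arg;
    size_t len = strlen(start);
    if (len >= 5 && strcmp(start + len - 5, ".html") == 0)
        len -= 5;
    if (len >= outlen)
        len = outlen - 1;
    memcpy(out, start, len);
    out[len] = '\0';
}
```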
For example, on my own server, MRTG has targets called "zippy_e0" (for my eth0 traffic) and "zippy_la" (for my load average.) I have a cron job which runs the script every day- the crontab entry looks like this:
MAILTO="reports@domain.xyz"
1 0 * * * /usr/local/sbin/email-mrtg zippy_e0 zippy_la

(You can probably guess what the real domain name is.)
And at 12:01 AM every day, my server sends me a snapshot of these two MRTG pages.
2007-09-11 I noticed that some email programs showed the correct images every day, but some email programs seemed to be "caching" the images somehow. Turns out that some email programs will show you the cached version of an image with the same "Content-ID" value if it finds one... so I updated the code, now the "Content-ID" values contain a timestamp so that if you have a week's worth of these messages in your inbox, your MUA (Mail User Agent, the program you run in order to read and write email) will show each message's images correctly.
File: | email-mrtg |
Size: | 6,048 bytes |
MD5: | 6473218d549bcdcdfd39f79bf00e399d |
SHA-1: | a98935f3ba277574573a8ca95b22a1c74a9a1c1b |
RIPEMD-160: | 3277692a4413d13e337428ef02c64eddf7e1fffc |
PGP Signature: | email-mrtg.asc |
In order for MRTG to monitor the load average and ethernet traffic on a server, you need a program which gathers the data and makes it available to MRTG. Normally, MRTG works by sending SNMP requests, and it's common to run an SNMP "agent" (a server process) on a server in order to allow MRTG to collect these statistics.
However, MRTG also supports a less complicated method of gathering data, which involves running an external script which prints out four pieces of information- an "in" counter, an "out" counter, the system's uptime, and the system's name or other identifier. It comes with a collection of these scripts (in the "contrib" directory) but I found it simpler and easier to just write my own.
The one thing to mention is that MRTG normally expects to work with counts which are integers. The "mrtg-load" script actually multiplies the load averages by 100 before sending them to MRTG (so a load average of 1.32 would be reported as 132), and the directives below cause MRTG to divide the values by 100 when generating the HTML and the labels on the graphs.
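The output format can be sketched like this (a hypothetical mrtg_lines() helper in C; the real mrtg-load is a script, and the uptime/hostname handling here is simplified):

```c
#include <stdio.h>

/* Format the four lines MRTG expects from an external script: an "in"
 * value, an "out" value, the uptime, and the system name.  The 5- and
 * 15-minute load averages are scaled by 100 to make them integers,
 * matching the Factor/YTicsFactor directives of 0.01. */
int mrtg_lines(double load5, double load15, const char *uptime,
               const char *host, char *out, size_t outlen)
{
    return snprintf(out, outlen, "%d\n%d\n%s\n%s\n",
                    (int)(load5 * 100 + 0.5), (int)(load15 * 100 + 0.5),
                    uptime, host);
}
```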
2007-09-15 Found and fixed a bug which caused mrtg-eth0 to sometimes miss the packet counts when reading /proc/net/dev.
To use the scripts, add lines like this to your "mrtg.cfg" file:
# Server xyz ethernet traffic
Target[xyz_e0]: `/usr/local/sbin/mrtg-eth0`
MaxBytes[xyz_e0]: 12500000
Title[xyz_e0]: xyz eth0
PageTop[xyz_e0]: <h1>xyz eth0</h1>
Options[xyz_e0]: growright,nopercent,bits

# Server xyz load average
Target[xyz_la]: `/usr/local/sbin/mrtg-load`
MaxBytes[xyz_la]: 10000
Title[xyz_la]: xyz Load Average
PageTop[xyz_la]: <h1>xyz Load Average</h1>
Factor[xyz_la]: 0.01
YTicsFactor[xyz_la]: 0.01
Options[xyz_la]: growright,nopercent,gauge
YLegend[xyz_la]: Load Average
ShortLegend[xyz_la]:
Legend1[xyz_la]: 5-minute load average
Legend2[xyz_la]: 15-minute load average
Legend3[xyz_la]:
Legend4[xyz_la]:
LegendI[xyz_la]: 5min
LegendO[xyz_la]: 15min
Somebody asked me how to take a patch file and break it into separate parts based on what file the change was modifying. I told them it would be a five-minute script... I was right, it only took about five minutes to write and test.
Run the script with the original patch either on standard input, or name the file on the command line (gotta love Perl's <> operator!). It will write out "split-filename.patch" for each file that the original patch modifies.
Note that the script expects its input to be a "unified diff", that is, a file produced by "diff" with the "-u" option. This is how I normally produce all of my own patches.
File: | patch-split |
Size: | 1,340 bytes |
MD5: | a2fc47a649b538436e0615dfbd51fcf7 |
SHA-1: | 97be68a369d0fad1723a86d3462910a19ad705b2 |
RIPEMD-160: | 66f3ad796a3fbcfe35b7010a43e7f7b86130fb2f |
PGP Signature: | patch-split.asc |
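The core of the five-minute version is just watching for the "---"/"+++" header pairs that start each file's changes in a unified diff. Here is a hedged sketch along those lines (not the actual patch-split; the function name and structure are illustrative):

```perl
#!/usr/bin/perl
# Sketch of splitting a unified diff by file, along the lines of the
# patch-split script described here (the real script may differ).
# Each file's changes start with a "--- oldname" / "+++ newname" pair.
use strict;
use warnings;
use File::Basename qw( basename );

# Write one "split-<file>.patch" per modified file; returns the names.
sub split_patch
{
    my @input = @_;
    my ( $fh, $header, @written );

    for my $line ( @input )
    {
        if ( $line =~ /^--- / )
        {
            $header = $line;        # hold it until the matching "+++"
        }
        elsif ( defined $header && $line =~ m{^\+\+\+ (\S+)} )
        {
            my $name = 'split-' . basename( $1 ) . '.patch';
            open( $fh, '>', $name ) or die "$name: $!\n";
            print $fh $header, $line;
            push @written, $name;
            $header = undef;
        }
        elsif ( defined $fh )
        {
            print $fh $line;        # hunk lines go to the current file
        }
    }
    return @written;
}

# Read the patch from a named file or standard input, as <> allows.
split_patch( <> ) if @ARGV;
```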
I wrote some functions which use ANSI control sequences to write a persistent "title" at the top of the screen, showing the progress of a script as it runs. This is useful when the script runs commands which generate a lot of scrolling output (e.g. compiling software).
Rather than try to squeeze it all in here, I wrote a separate web page which explains the ANSI sequences, lists the ones I use to set and clear "titles", and shows a few actual scripts which use them.
Here's the link: Screen Titles
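As a taste of the technique (the linked page has the full details; the sequences below are standard ANSI/VT100 ones and may not match that page exactly):

```perl
#!/usr/bin/perl
# Sketch of the "screen title" trick: standard ANSI/VT100 sequences that
# paint a status line on row 1, then put the cursor back where it was.
# These may not be the exact sequences used on the linked page.
use strict;
use warnings;

# Build the sequence that shows $text, in inverse video, on row 1.
sub title_seq
{
    my ( $text ) = @_;
    return "\e[s"       # save the cursor position
         . "\e[1;1H"    # jump to row 1, column 1
         . "\e[7m"      # inverse video, so the title stands out
         . "\e[K"       # erase the rest of the line
         . $text
         . "\e[0m"      # back to normal attributes
         . "\e[u";      # restore the saved cursor position
}

# Same idea, but blanks the title line instead.
sub clear_title_seq
{
    return "\e[s\e[1;1H\e[K\e[u";
}

$| = 1;                 # don't buffer the escape sequences
print title_seq( "building: pass 1 of 3" );
```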
I was given two IEE PDK-0003 VFD (vacuum fluorescent display) units. These are two-line green dot matrix display units you would normally see on top of a pole, attached to the back of a cash register at a convenience store.
The units came with their own power supplies and a 9-pin serial connection to go back to the computer, and I was able to find information on the manufacturer's web site about the codes needed to program the unit: mostly standard ASCII, but they also have a set of one-byte control codes for things like moving the cursor, turning the cursor on and off, and so forth.
I'm using one as a clock, hanging from a shelf above the back of my desk, so I can glance up and see both the local time and GMT time (I'm also a ham radio operator and I log my contacts using GMT time.)
This is the Perl script I'm using to generate the clock.
File: | vfdclock |
Size: | 3,003 bytes |
Date: | 2007-10-29 15:18:03 +0000 |
MD5: | f4d6c2a75d1c412206be6b59c928c080 |
SHA-1: | 40f308e2a35aa1a9e5fd1989e0f12c2a8460cb9d |
RIPEMD-160: | 07628af7672278ed947081cf7a99819c3e09cd99 |
PGP Signature: | vfdclock.asc |
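To illustrate the idea (this is not the actual vfdclock script; the 0x0B "cursor home" control code, the 2x20 geometry, and the device path are assumptions to check against the PDK-0003 manual):

```perl
#!/usr/bin/perl
# Sketch of the two-line clock: local time on the top line, GMT on the
# bottom.  The 0x0B "cursor home" code and the 2x20 geometry are
# assumptions here; check the PDK-0003 manual for the real values.
use strict;
use warnings;
use POSIX qw( strftime );

# Build one full screen update, padded to the display width so that
# stale characters are always overwritten.
sub clock_frame
{
    my ( $now ) = @_;
    my $local = strftime( '%Y-%m-%d %H:%M:%S', localtime $now );
    my $gmt   = strftime( '%H:%M:%S GMT',      gmtime    $now );
    return sprintf "\x0B%-20.20s%-20.20s", $local, $gmt;
}

# The real unit hangs off a 9-pin serial port; something like:
#   open( my $vfd, '>', '/dev/ttyS0' ) or die "/dev/ttyS0: $!\n";
#   while ( 1 ) { print $vfd clock_frame( time ); sleep 1; }
print clock_frame( time ), "\n";
```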
This is a Perl library of functions that I use in many of my scripts. It contains functions to do many useful things with IP addresses. It uses a Perl "object" (a blessed reference to a hash) containing both the IP address and a netmask.
File: | IPaddr.pm |
Size: | 9,803 bytes |
Date: | 2007-11-12 01:12:09 +0000 |
MD5: | ee613a23e7c80fbd051f92a960bc7e04 |
SHA-1: | 78d65f2ae6dbe83ab2635e03e2a61f22177c46b6 |
RIPEMD-160: | 4edc5a2d1f83b25708a6c0dac53699a4e7ae5c4b |
PGP Signature: | IPaddr.pm.asc |
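To illustrate the "blessed hash holding an address and a netmask" design (all package and method names below are invented for the example; they are not IPaddr.pm's real API):

```perl
#!/usr/bin/perl
# Hypothetical sketch of a blessed hash holding an IP address and a
# netmask.  The names here are illustrative, NOT IPaddr.pm's real API.
use strict;
use warnings;

package IPaddr::Sketch;

# Convert a dotted-quad string to a 32-bit integer, and back.
sub ip2int { my @o = split /\./, shift; ( $o[0] << 24 ) | ( $o[1] << 16 ) | ( $o[2] << 8 ) | $o[3] }
sub int2ip { my $n = shift; join '.', map { ( $n >> $_ ) & 0xFF } 24, 16, 8, 0 }

sub new
{
    my ( $class, $addr, $bits ) = @_;
    my $self = { addr => ip2int( $addr ), bits => $bits };
    return bless $self, $class;
}

sub mask      { 0xFFFFFFFF & ( 0xFFFFFFFF << ( 32 - $_[0]{bits} ) ) }
sub network   { int2ip( $_[0]{addr} &  $_[0]->mask ) }
sub broadcast { int2ip( $_[0]{addr} | ~$_[0]->mask & 0xFFFFFFFF ) }

package main;

my $ip = IPaddr::Sketch->new( '192.168.5.130', 26 );
print $ip->network, ' - ', $ip->broadcast, "\n";
```

Keeping the address and mask together in one object means methods like network and broadcast need no extra arguments.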
I've written a full web page about how I set up a machine at my house to maintain a mirror of my server, by using rsync with ssh to pull backups every few hours.
There have been several times I have wished I had a log of exactly what I had seen and/or done while ssh'd into a client's server, either for my own reference, or to show to a client. I remember a few years ago, reading about a way to do this by combining ssh with the "tee" command, but I didn't remember the exact details.
It turns out it's rather simple... the command looks like this:
$ ssh userid@server | tee logfile
The tee command copies every byte it receives on its "standard input" channel to its "standard output" channel, while also writing a copy to a file. By piping ssh's output through it, any data received from the remote server is written to the file at the same time it's written to your screen.
In order to make the process a bit easier on myself, and to standardize where and how these log files are created, I wrote a script called "logssh". It works by examining its command line options to find the userid and server name where you want to connect, and combining that with a timestamp to create the logfile... and then exec()'ing the real "ssh ... | tee ..." command line for you.
File: | logssh |
Size: | 2,574 bytes |
Date: | 2008-07-17 01:25:48 +0000 |
MD5: | 54f38e175c04f29ac64c954c097e44d9 |
SHA-1: | d718f572d8abf95028be29f4e369b9700e1021fd |
RIPEMD-160: | 98e41327b8db050d85263b69ae4d6015de60ed80 |
PGP Signature: | logssh.asc |
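A minimal sketch of the same approach (not the actual logssh; the option handling and the "sshlogs" directory under $HOME are simplified assumptions):

```perl
#!/usr/bin/perl
# Sketch of the logssh idea: find the user@host argument, build a
# timestamped logfile name from it, and exec the "ssh ... | tee ..."
# pipeline.  The real script's option handling is more careful.
use strict;
use warnings;
use POSIX qw( strftime );

# Assume the last non-option argument is the "user@host" destination.
sub logssh_command
{
    my @args = @_;
    my ( $dest ) = grep { !/^-/ } reverse @args;
    die "usage: logssh [options] user\@host\n" unless defined $dest;

    ( my $safe = $dest ) =~ s/[^\w.@-]/_/g;     # keep the name shell-safe
    my $log = sprintf '%s/sshlogs/%s-%s.log', $ENV{HOME} || '.',
        $safe, strftime( '%Y%m%d-%H%M%S', localtime );

    my $cmd = join ' ', 'ssh', map { "'$_'" } @args;
    return "$cmd | tee '$log'";
}

# exec() with a single string hands the pipeline to a shell, which is
# what makes the "| tee" part work.
exec logssh_command( @ARGV ) if @ARGV;
```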
DHCP option 119 allows DHCP servers to tell clients to use more than one DNS search domain. It is documented in RFC 3397.
The DHCP server embedded into MikroTik's RouterOS (which I use at home, at work, and at several clients' offices) doesn't have native support for DHCP option 119, aka the "domain-search" option. It does, however, have a way to set up arbitrary options, by specifying the code number (i.e. 119) and the raw value.
MikroTik's documentation explains how to create an arbitrary option like this, but it doesn't go into any specifics about what any particular option (such as 119) is supposed to actually contain. I tried a few different things and, while the option was being sent (I verified this using Wireshark), the value I was sending was not being accurately parsed as the list of domains I was trying to use (or as anything valid, for that matter.)
I did some more searching, and found this page which explained how to calculate the required value.
The value needed for this option needs to use "DNS encoding", and is required to use DNS compression, as documented in RFC 1035 section 4.1.4. This kind of "compression" can be tricky to do by hand, so I wrote a Perl script to do it for me.
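To illustrate the encoding (a simplified sketch, not the actual dhcp119 script): each label becomes a length byte followed by its text, and a name ends in either a 0x00 root label or a two-byte compression pointer back to an already-encoded suffix.

```perl
#!/usr/bin/perl
# Sketch of the RFC 1035 name encoding (with compression) that DHCP
# option 119 needs; a simplified take, not the actual dhcp119 script.
use strict;
use warnings;

sub encode_search_list
{
    my @domains = @_;
    my $buf = '';
    my %seen;                   # suffix string -> offset where it starts

    for my $domain ( @domains )
    {
        my @labels = split /\./, lc $domain;
        my $pointer;
        while ( @labels )
        {
            my $suffix = join '.', @labels;
            if ( exists $seen{$suffix} )
            {
                $pointer = $seen{$suffix};   # reuse the earlier encoding
                last;
            }
            $seen{$suffix} = length $buf;
            my $label = shift @labels;
            $buf .= chr( length $label ) . $label;
        }
        # Terminate with a compression pointer (top two bits set) or a
        # root label if no suffix could be shared.
        $buf .= defined $pointer ? pack( 'n', 0xC000 | $pointer ) : "\x00";
    }
    return $buf;
}

# The RFC 3397 section 3 example: eng.apple.com + marketing.apple.com
my $value = encode_search_list( 'eng.apple.com', 'marketing.apple.com' );
print join( ' ', map { sprintf '%02x', ord } split //, $value ), "\n";
```

For the apple.com pair, "marketing" is followed by the pointer c0 04, pointing back to "apple.com" at offset 4, matching the worked example in RFC 3397 section 3.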
File: | dhcp119 |
Size: | 5,849 bytes |
Date: | 2016-07-31 17:28:31 +0000 |
MD5: | f153d6ffc85b341b4ba2369eba89f1b0 |
SHA-1: | ba0fe679bb03f24565c299c32a365a9f5c76b749 |
SHA-256: | bdec6fc30946a815fa50991568431e5ec4de5db81b6f28346dc6f5b93b40111b |
RIPEMD-160: | 4fc81965ad83e9558f46b1cef9d593774167998b |
PGP Signature: | dhcp119.asc |
To build a domain-search string containing the domains "first.example.net" and "second.example.net" (i.e. the second example used on this page), run this command:
And then to use the value on a MikroTik, the command would be...
For another example, this one using the names "eng.apple.com" and "marketing.apple.com" (i.e. the example in RFC 3397 section 3), run this command: