These days I’ve implemented a Linux bridge that does traffic accounting and IP restrictions per MAC address, and decided to share my experience. In my setup, eth0 faces my gateway and eth1 faces a switch with servers connected to it. First we need to bridge eth0 and eth1. You’ll need the brctl command; to install it on Gentoo Linux run:


emerge bridge-utils

These commands bridge the eth0 and eth1 interfaces. DO NOT run them if you’re connected to the box through eth0 or eth1. You may want to change brname to something more meaningful:


/sbin/brctl addbr brname
/sbin/brctl setageing brname 86400
/sbin/brctl addif brname eth0
/sbin/brctl addif brname eth1
/sbin/ifconfig brname up

Now if you connect some computers to eth0 and eth1, they should see each other.
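
To verify that the bridge came up and has both interfaces attached, you can run:

/sbin/brctl show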

Now, to limit IP addresses per MAC address you will need ebtables (you should enable it in your kernel configuration too); to install it on Gentoo I use:


emerge ebtables

First, let’s create the file /etc/ipmac.conf that will contain MAC addresses and the IP addresses assigned to them. The file is in this format:


00:01:03:D6:63:EF 87.120.40.143

You can assign multiple IPs to one MAC address by adding multiple lines with the same MAC. Here is the script that implements these rules for me; remember, in my setup the servers are on eth1 and their gateway is on eth0:


#!/bin/bash

EBTABLES=/sbin/ebtables

echo "Loading rules"

$EBTABLES -F

while read -r mac ip; do
	echo "Allowing $ip from $mac"
	$EBTABLES -A FORWARD -p IPv4 -i eth1 -o eth0 -s "$mac" --ip-src "$ip" -j ACCEPT
	$EBTABLES -A FORWARD -p IPv4 -i eth0 -o eth1 -d "$mac" --ip-dst "$ip" -j ACCEPT
	$EBTABLES -A FORWARD -p ARP -i eth1 -o eth0 --arp-opcode 2 --arp-ip-src "$ip" --arp-mac-src "$mac" -j ACCEPT
	$EBTABLES -A FORWARD -p ARP -i eth0 -o eth1 --arp-opcode 2 --arp-ip-dst "$ip" --arp-mac-dst "$mac" -j ACCEPT
done < /etc/ipmac.conf

# ARP packets other than replies (opcode 2) are always let through
$EBTABLES -A FORWARD -p ARP -i eth1 -o eth0 --arp-opcode ! 2 -j ACCEPT
$EBTABLES -A FORWARD -p ARP -i eth0 -o eth1 --arp-opcode ! 2 -j ACCEPT

# everything that did not match an allowed MAC/IP pair is dropped
$EBTABLES -A FORWARD -i eth1 -o eth0 -j DROP
$EBTABLES -A FORWARD -i eth0 -o eth1 -j DROP

Save the script and execute it. Now your servers should only be able to use the IP addresses assigned to them in /etc/ipmac.conf.
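
You can check that the rules are matching traffic by listing them together with their counters (this is the same output the MRTG script below parses):

/sbin/ebtables -L --Lc --Lmac2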

Now, if you want to have MRTG graphs for every IP and every MAC address, you will need PHP and MRTG installed; on Gentoo just type:


emerge php mrtg

You will also need some web server to serve the graphs; in my case it is Apache, whose document root is /var/www/localhost/htdocs. You’ll need to create a directory for MRTG:


mkdir /var/www/localhost/htdocs/mrtg
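
The script below also writes per-IP and per-MAC counter files into a data subdirectory, so create that as well:

mkdir /var/www/localhost/htdocs/mrtg/data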

Now save this script somewhere and mark it executable:


#!/usr/bin/php
<?php

$data=`/sbin/ebtables -L --Lc --Lmac2`;

$ip=array();
$mac=array();

preg_match_all('#-p IPv4 -s ([0-9a-f\:]+) -i eth1 -o eth0 --ip-src ([0-9\.]+) -j ACCEPT , pcnt = \d+ -- bcnt = (\d+)#', $data, $m);
for ($i=0; $i<count($m[0]); $i++){
if (!isset($ip[$m[2][$i]]))
$ip[$m[2][$i]]=array('in' => 0, 'out' => 0);
if (!isset($mac[$m[1][$i]]))
$mac[$m[1][$i]]=array('in' => 0, 'out' => 0);
$ip[$m[2][$i]]['out']+=1*$m[3][$i];
$mac[$m[1][$i]]['out']+=1*$m[3][$i];
}

preg_match_all('#-p IPv4 -d ([0-9a-f\:]+) -i eth0 -o eth1 --ip-dst ([0-9\.]+) -j ACCEPT , pcnt = \d+ -- bcnt = (\d+)#', $data, $m);
for ($i=0; $i<count($m[0]); $i++){
if (!isset($ip[$m[2][$i]]))
$ip[$m[2][$i]]=array('in' => 0, 'out' => 0);
if (!isset($mac[$m[1][$i]]))
$mac[$m[1][$i]]=array('in' => 0, 'out' => 0);
$ip[$m[2][$i]]['in']+=1*$m[3][$i];
$mac[$m[1][$i]]['in']+=1*$m[3][$i];
}

$cfg=fopen('/etc/mrtg/tr.cfg', 'w');
fwrite($cfg, "EnableIPv6: no
WorkDir: /var/www/localhost/htdocs/mrtg
Options[_]: bits,growright
");

foreach ($ip as $k => $v){
$fn="/var/www/localhost/htdocs/mrtg/data/ip-$k";
$f=fopen($fn, "w");
$v['in']=round($v['in']);
$v['out']=round($v['out']);
fwrite($f, "{$v['in']}\n{$v['out']}\n");
fclose($f);
fwrite($cfg, "

Target[localhost_ip$k]: `cat $fn`
SetEnv[localhost_ip$k]: MRTG_INT_IP=\"$k\" MRTG_INT_DESCR=\"$k\"
MaxBytes[localhost_ip$k]: 1250000
Title[localhost_ip$k]: ip$k -- btcgate
PageTop[localhost_ip$k]: <h1>ip$k -- btcgate</h1>
<table>
<tr><td>System:</td> <td>btcgate in \"Cherni vrah 66, Sofia 1407\"</td></tr>
<tr><td>Maintainer:</td> <td>anton@titov.net</td></tr>
<tr><td>Description:</td><td>ip $k</td></tr>
<tr><td>ifType:</td> <td>ethernetCsmacd (6)</td></tr>
<tr><td>ifName:</td> <td></td></tr>
<tr><td>Max Speed:</td> <td>10.0 Mbits/s</td></tr>
</table>

");
}

foreach ($mac as $k => $v){
$fn="/var/www/localhost/htdocs/mrtg/data/mac-$k";
$f=fopen($fn, "w");
$v['in']=round($v['in']);
$v['out']=round($v['out']);
fwrite($f, "{$v['in']}\n{$v['out']}\n");
fclose($f);
fwrite($cfg, "

Target[localhost_mac$k]: `cat $fn`
SetEnv[localhost_mac$k]: MRTG_INT_IP=\"$k\" MRTG_INT_DESCR=\"mac $k\"
MaxBytes[localhost_mac$k]: 1250000
Title[localhost_mac$k]: mac$k -- btcgate
PageTop[localhost_mac$k]: <h1>mac$k -- btcgate</h1>
<table>
<tr><td>System:</td> <td>btcgate in \"Cherni vrah 66, Sofia 1407\"</td></tr>
<tr><td>Maintainer:</td> <td>anton@titov.net</td></tr>
<tr><td>Description:</td><td>mac $k</td></tr>
<tr><td>ifType:</td> <td>ethernetCsmacd (6)</td></tr>
<tr><td>ifName:</td> <td></td></tr>
<tr><td>Max Speed:</td> <td>10.0 Mbits/s</td></tr>
</table>

");
}

fclose($cfg);
`/usr/bin/mrtg /etc/mrtg/tr.cfg`;
`/usr/bin/indexmaker --output=/var/www/localhost/htdocs/mrtg/index.html --title="traffic report" --sort=name --enumerate /etc/mrtg/traffic.cfg /etc/mrtg/tr.cfg`;

Execute this script once. You should see many MRTG errors. Execute it once more, and you should see fewer errors. Once more, and the errors should vanish. Set up a cronjob to execute this script every 5 minutes. Point your browser to http://ip.of.the.bridge/mrtg and you should see the graphs.
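
A crontab line for that looks like this (the path to the script is of course yours):

*/5 * * * * /path/to/the/script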

“ip.of.the.bridge” is actually a tricky part; in my case I have a third interface, eth2, which has a local 10.x.x.x address, and I use it to view the traffic. You can also assign an IP address to the bridge interface.

After a few unsuccessful Google searches about changing Linux software RAID sync speed (while waiting for an idle CentOS box to sync a 3×200GB SATA RAID1), I decided to run:


find /proc/ -name \*raid\*

which led me to the files /proc/sys/dev/raid/speed_limit_max and /proc/sys/dev/raid/speed_limit_min, which are quite self-explanatory, and now I’m happily syncing at 40MB/s.
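
The values there are in KB/s, so to raise the minimum sync speed you can do something like:

echo 40000 > /proc/sys/dev/raid/speed_limit_min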

If you want to debug why your C/C++ application segfaults once in a while (a daemon that segfaults once every few days) under Linux, the obvious way is to tell your program to create core dump files on segfault. It took me a few hours to make my program dump core: by default under Linux, setuid programs do not dump core. To force a program to do so, you should execute this after setuid:


prctl(PR_SET_DUMPABLE, 1);

and include:


#include <sys/prctl.h>

Of course, you should also enable core dumps with setrlimit:


setrlimit(RLIMIT_CORE, &limit);

To tune where your core dumps are stored, search Google for core_pattern or /proc/sys/kernel/core_pattern; I don’t feel like copy-pasting.
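
Putting it all together, here is a minimal sketch (the uid is just an example, and error checking is omitted):

#include <sys/prctl.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
	struct rlimit limit = { RLIM_INFINITY, RLIM_INFINITY };

	setuid(1000);                   /* example uid; dropping privileges clears the dumpable flag */
	setrlimit(RLIMIT_CORE, &limit); /* allow core files of unlimited size */
	prctl(PR_SET_DUMPABLE, 1);      /* re-enable core dumps after setuid */

	/* ... daemon code that may segfault ... */
	return 0;
}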

Hope this helps somebody.

Ever wanted to be able to create site screenshots, just like Alexa? Want to do it on your server with a script? Then read this article.

The idea is simple: install xvnc on your server so you’ll have a virtual X display, then install the Opera browser (Firefox does not support the -geometry parameter), and finally install ImageMagick (the chances are you already have it) so you can screenshot your virtual display. On Gentoo Linux I’ve done this by typing:


USE='X qt server png jpeg -kde' emerge vnc opera imagemagick

Create a user under which xvnc/Opera will run, and switch to it. Start vncserver with:


vncserver :1 -depth 24 -geometry 1024x768

it will ask for a password; you will need it if you want to connect to this display from a desktop computer. Of course you can change the resolution, which is 1024×768 in this case (do not raise -depth, at least not without reading “man vncserver”). Now start the Opera browser with:


/usr/bin/opera -display :1

Of course you will see nothing. Connect from your computer to the remote VNC server with:

vncviewer xx.xx.xx.xx:1

where xx.xx.xx.xx of course is the IP address of your server. You should see the Opera window. Remove all the toolbars, so you have only the menu on top. SECURE YOUR BROWSER - you may want to completely disable JavaScript, or at least disable all popups. Close the browser. Now is a good time to decide if you want to install Macromedia Flash (installation is trivial).

So now you should be able to make a screenshot from the console: run Opera with a URL as a parameter, also specifying -geometry, and screenshot the site with import (part of the ImageMagick package). This simple shell script runs Opera, waits 25 seconds for the site to load (unfortunately I didn’t find a good way to check whether the site has loaded) and screenshots it:


#!/bin/bash

export DISPLAY=":1"
/usr/bin/opera -display :1 -geometry 1024x768+0+0 -nomail -nosession "$1" > /dev/null 2> /dev/null &
/usr/bin/sleep 25
/usr/bin/import -window root -display :1 -crop 1024x768+7+50 "$2"
/usr/bin/killall opera

Usage is:
./screenshot http://www.google.com/ google.png

where screenshot is the name of the script. After executing it you will have google.png with a screenshot of Google’s website. If you see the browser’s menus or frames, play with the -crop parameter of import. Do not pass unchecked/unescaped parameters to the script, or somebody will be able to execute a program on your server! Also bear in mind that running a browser on your server is not the most secure thing in the world. Actually, it is as secure as browsing with the Opera browser, so the choice is yours. Just don’t blame me if something goes wrong.

You can also use this sample Perl webserver. It will screenshot a site when called as:
http://xx.xx.xx.xx/shot?http://www.google.com/

The server is single-process, so you are limited to a single user at a time; also bear in mind that you will have to wait 25+ seconds for your screenshot. Before starting it, make sure you have HTTP::Daemon and HTTP::Status installed. Also bear in mind that this is only a TEST server that IS VULNERABLE, but it’s OK for testing and use in a trusted environment.


#!/usr/bin/perl

$SIG{PIPE} = 'IGNORE';

use HTTP::Daemon;
use HTTP::Status;

my $d = HTTP::Daemon->new(
'LocalPort' => 8899
) || die;
print "Please contact me at: url, “>\n”;
while (my $c = $d->accept) {
if (my $r = $c->get_request) {
if ($r->method eq 'GET' and $r->url->path eq "/shot") {
$site=urlDecode($r->url->query);
`/path/to/screenshot $site /path/to/writable/area/site.png`;
$c->send_file_response("/path/to/writable/area/site.png");
}
else {
$c->send_error(RC_FORBIDDEN)
}
}
$c->close;
undef($c);
}

sub urlDecode {
my ($string) = @_;
$string =~ tr/+/ /;
$string =~ s/%([a-fA-F0-9][a-fA-F0-9])/pack("C", hex($1))/eg;

return $string;
}

This webserver should be run like:

nohup perl ws.pl &

as the same user that vncserver is running as (root is not a good choice, really).

Have fun (and screenshots). This guide is intentionally not a total copy-paste solution; if you have a hard time with the missing steps, better outsource this task to somebody capable.

This works for me; you can click the counter button at the bottom of the page and you will see a screenshot of my site on the counter’s page (this is a free Bulgarian web counter written by me, or actually only its high-performance engine).

Today, after some backups generated by my C program produced a few files exactly 2GB in size, I remembered that there was some parameter to open() for working with large files. I opened the man page for open and read that the option is O_LARGEFILE. I also liked the idea of using O_DIRECT. But when I tried to compile my program I got two errors:

hostbackupd.c: In function `somefunction':
somefile.c:102: error: `O_LARGEFILE' undeclared (first use in this function)
somefile.c:102: error: (Each undeclared identifier is reported only once
somefile.c:102: error: for each function it appears in.)

and

hostbackupd.c: In function `somefunction':
somefile.c:102: error: `O_DIRECT' undeclared (first use in this function)
somefile.c:102: error: (Each undeclared identifier is reported only once
somefile.c:102: error: for each function it appears in.)

After some googling I didn’t find anything useful, so I looked at the .h files in /usr/include. I found that the header files check whether _GNU_SOURCE is defined and, if it is, define O_DIRECT and O_LARGEFILE. So to use these options you should compile your program with

gcc -D_GNU_SOURCE .....

or you should put

#define _GNU_SOURCE

before all your includes. Bear in mind that this may make your program less portable.

And now some words about O_DIRECT. If you plan to use O_DIRECT, your buffers should be aligned to the page size (or the sector size, depending on the Linux version) and you should read/write in multiples of the page size. To create a buffer aligned to the page size with size BUFFER_SIZE you can use code like this:


pagesize=getpagesize();
realbuff=malloc(BUFFER_SIZE+pagesize);
alignedbuff=(void *)((((uintptr_t)realbuff+pagesize-1)/pagesize)*pagesize);

where pagesize is an int, realbuff and alignedbuff are pointers, and uintptr_t (from <stdint.h>) keeps the pointer arithmetic safe on 64-bit systems. realbuff is the pointer you should free() when you’re finished, and alignedbuff is the buffer you should use with read/write.

You should also write only in multiples of the page size. So if you want to create a file whose size is not a multiple of the page size, you should ftruncate() the file when you’ve finished writing.
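
If your system has posix_memalign(), you can avoid the manual pointer arithmetic altogether; here is a minimal sketch (the file name and buffer size are just examples, error checking is omitted):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFFER_SIZE 65536 /* keep this a multiple of the page size */

int main(void)
{
	void *buf;
	int fd;

	/* posix_memalign returns a buffer already aligned to the page size */
	if (posix_memalign(&buf, getpagesize(), BUFFER_SIZE))
		return 1;

	fd = open("dump.bin", O_WRONLY | O_CREAT | O_DIRECT | O_LARGEFILE, 0644);
	/* fill buf with your data here */
	write(fd, buf, BUFFER_SIZE); /* aligned buffer, aligned length */
	/* if the real data is shorter, write the padded block and
	   ftruncate() the file to the real length when you are done */
	close(fd);
	free(buf);
	return 0;
}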

A few days ago a server of mine had all of its 4 disks crash. The problem was that one of the disks was overheating, and that seemed to help the other hard drives crash too (don’t buy Maxtor).

So, if you have a case like mine (some partitions with reiserfs on a hard drive with bad blocks), this guide may help you.

First, try to mount your partition and copy as much data as you can. But chances are, if you are searching the net for a solution, that you cannot mount your partition and reiserfsck dies saying you have bad blocks.

First download and install dd_rescue (if you plan to boot from a LiveCD while restoring, you can try this statically linked dd_rescue binary), as dd conv=noerror will not help - it will not write anything in place of the unreadable zones, which changes the drive size, and most of the data after the bad blocks will not be restored (or you will have the files, but with garbage in them). dd_rescue also shows you how much of the disk cannot be copied, which may give you some hope of restoring your files, as most of the time you will have just a few megabytes or less in bad blocks.

You will need another hard disk with enough free space to copy the whole partition (not only the used space). Now boot from a good hard drive or from a LiveCD (Gentoo, Knoppix, Ubuntu; it should not matter). Mount the good hard drive with free space somewhere.

Now, if your failed partition is /dev/sda1 and /mnt/disk2 is a place with enough free space, run

dd_rescue /dev/sda1 /mnt/disk2/sda1-img
or
/path/to/just/downloaded/from/above/link/dd_rescue /dev/sda1 /mnt/disk2/sda1-img

Now find some work to do, as this will be slow.

When dd_rescue finishes, you may want to make another copy of the just-copied image if you have enough free space; this will most probably be faster:

dd_rescue /mnt/disk2/sda1-img /mnt/disk2/sda1-img2

Now run

reiserfsck --rebuild-tree /mnt/disk2/sda1-img

if the program dies silently after you type Yes, try:
reiserfsck --rebuild-sb /mnt/disk2/sda1-img

first.

Now you should be able to mount your partition as:
mkdir /mnt/sda1fixed
mount -o loop /mnt/disk2/sda1-img /mnt/sda1fixed

Now go to /mnt/sda1fixed and see what you have.

If you are unlucky, most of your files will be in /lost+found with strange names. The good news is that if you find a directory there, its contents will have good names. So you may want to try:
cd /mnt/sda1fixed/lost+found
find -name some_file_with_not_common_name_that_should_be_in_directory_that_is_important_for_you

with

find -name shadow

you should be able to find your /etc directory.

After trying this, if you’re still missing important data, you can try the above procedure again, but adding -S to the reiserfsck parameters:
reiserfsck --rebuild-sb -S /mnt/disk2/sda1-img

but first read reiserfsck man page.

Hope this helped you.

And a piece of advice: back up your data regularly.

There are many forms on the web that use PHP’s mail() function to send mail, most of the time to the site owner. The problem is that many of them do not verify the submitted data and can be tricked into sending mail to any recipient, with a changed mail body. The trick is that most of these forms let the user write a mail subject and/or his mail address, which will be used as the “From:” for sending the mail. The only thing a spammer has to do to send spam through such a form is inject text similar to this into the subject field:

Some subject here
Bcc: john@domain1.com, jack@domain2.com

Here goes the spam body. Take into account that the value submitted as the subject should contain one newline character between the subject and the Bcc:, and two newline characters between the Bcc: and the body.

So the only thing a spammer has to do is try to insert text with newline characters into some (or all) of the form fields, and your server will start sending spam.
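
If you only have a form or two to protect, a minimal userland check is to reject any value containing newline characters before it reaches mail(). A sketch (the field names here are made up):

function safe_header_value($value){
  // CR or LF in a header value means somebody is trying to inject headers
  if (preg_match('/[\r\n]/', $value))
    die('Header injection attempt detected');
  return $value;
}

$subject=safe_header_value($_POST['subject']);
$from=safe_header_value($_POST['from']);
mail('owner@example.com', $subject, $_POST['body'], "From: $from");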

The problem is that spammers know this and currently use it widely. A few days ago I was receiving a few hundred mails a day from AOL reporting that mail which originated from my network was marked as spam by users (you can register for this free service here: http://postmaster.aol.com/tools/fbl.html). Trying to force the hundreds of users whose forms were abused to validate their input data was not an option, so I’ve developed a patch for PHP’s mail() function that checks whether the subject or to fields contain newline characters, or whether the additional headers parameter contains two subsequent newline characters, and in that case throws a warning and does not send the mail. You can download it here:

http://www.titov.net/php-nospam.patch

It was developed for PHP 4.4.1 and it works for me. For a few days already no user (out of thousands) has reported any negative consequences of this patch, so give it a try.

If you’re a PHP developer, feel free to include the code or the idea in the PHP source. I’ll appreciate a credit in the ChangeLog if you do.

If you need to check, for instance, whether “ns1.titov.net” is a registered nameserver and can be entered as a nameserver for a domain, you can do it this way:

ask which nameservers are authoritative for the “net” zone with:

dig ns net

in the answer section you will have:

net. 172800 IN NS l.gtld-servers.net.
net. 172800 IN NS m.gtld-servers.net.
net. 172800 IN NS a.gtld-servers.net.
net. 172800 IN NS b.gtld-servers.net.
net. 172800 IN NS c.gtld-servers.net.
net. 172800 IN NS d.gtld-servers.net.
net. 172800 IN NS e.gtld-servers.net.
net. 172800 IN NS f.gtld-servers.net.
net. 172800 IN NS g.gtld-servers.net.
net. 172800 IN NS h.gtld-servers.net.
net. 172800 IN NS i.gtld-servers.net.
net. 172800 IN NS j.gtld-servers.net.
net. 172800 IN NS k.gtld-servers.net.

then you should ask all of the nameservers for the IP address of the host in question:
host ns1.titov.net i.gtld-servers.net

if you get a response from any one of the servers, then you have a valid nameserver:
Using domain server:
Name: i.gtld-servers.net
Address: 192.43.172.30#53
Aliases:

ns1.titov.net has address 66.230.155.157

You can use this PHP code to accomplish this. I know it’s ugly, as it uses shell commands and regular expressions to parse their output, but I did not feel like implementing the DNS protocol. The dig and host commands are part of the bind package and should be installed on most servers:

function isValidNs($ns){
  $tld='com';
  if (preg_match('#\.([a-z]+)$#', $ns, $m))
    $tld=escapeshellarg($m[1]);
  $ns=escapeshellarg($ns);
  $dig=`/usr/bin/dig ns $tld`;
  preg_match_all('#[a-z\.]+\.\s+\d+\s+IN\s+NS\s+([a-z0-9\.\-]+)\.#', $dig, $m);
  for ($i=0; $i<count($m[1]); $i++){
    $server=escapeshellarg($m[1][$i]);
    if (preg_match('#has address#', `/usr/bin/host $ns $server`))
      return true;
  }
  return false;
}
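
Usage is as simple as:

if (isValidNs('ns1.titov.net'))
  echo "valid nameserver\n";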

Note that you should ask all of the nameservers, as, for instance, the Bulgarian nameservers seem to be broken and only one of the eight (maybe the master) nameservers responsible for the “bg” zone replies to such requests.

Quite often people use queries like:

SELECT quote FROM quotes ORDER BY RAND() LIMIT 1

to get a random row (or rows) from a table. That’s quite a bad idea for big tables; if your table has just 50-100 rows, use whatever you want.

What happens when you run such a query? Let’s say you run it on a table with 10000 rows; the SQL server then generates 10000 random numbers, scans these numbers for the smallest one and gives you that row. Generating random numbers is a relatively expensive operation, and scanning them for the lowest one (if you have LIMIT 10, it will need to find the 10 smallest numbers) is also not so fast (if quote is a text column it’s slower; if it’s something with a fixed size it’s faster, maybe because of the need to create a temporary table).

So, how can you select a random row (or rows) without this overhead? There is no easy drop-in replacement. You can use something like:

SELECT COUNT(*) AS cnt FROM quotes

generate a random number between 0 and cnt-1 in your programming language, and run the query:

SELECT quote FROM quotes LIMIT $generated_number, 1
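
In PHP (using the old mysql_* API and the table from the example, assuming you are already connected) this could look like:

$res=mysql_query("SELECT COUNT(*) AS cnt FROM quotes");
$row=mysql_fetch_assoc($res);
$offset=mt_rand(0, $row['cnt']-1);
$res=mysql_query("SELECT quote FROM quotes LIMIT $offset, 1");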

Yes, these are two queries, but they are MUCH faster than the first one. This option is good if you need just one random row. If you need more rows, you can still use this trick: just subtract X (X is the number of rows you need) from cnt when generating the random number, and modify the query to:

SELECT quote FROM quotes LIMIT $generated_number, X

But this will give you X subsequent rows, starting from a random position. If that’s not an option, then you can use another approach: most of the time you have a unique numeric field in your tables that starts from 1 and continues to grow, so you can do something like:

SELECT MAX(id) AS maxid FROM quotes

generate X random numbers between 1 and maxid, join these numbers into a string with ',' as a separator, and run this query:

SELECT quote FROM quotes WHERE id IN ($idlist)

Yes, I know, you may have some deleted ids and the query may then return fewer rows than you need. But you can generate 10 times more random ids and modify the query to look like:

SELECT quote FROM quotes WHERE id IN ($list_with_10_times_more_ids_than_x) LIMIT $x

Now, if you do not have too many rows deleted from the table, the chances that you will not find, let’s say, 10 existing ids in a list of 100 random numbers are near zero, and you can include code in your program that checks whether you have $x rows in the result and, if not (let’s say once in 10000 times), runs the equivalent ORDER BY RAND() code. You can also generate 100 times more ids than you need - the only overhead you’re introducing is generating 100 times more random numbers than $x (if you need 5 random rows from a table with 10000 rows, it’s better to generate 500 random numbers than 10000), and parsing the query will be a bit slower, but the table scan will not be slower - the SQL server will stop searching for rows with ids from the list as soon as it finds $x existing rows.
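
Here is a sketch of this approach in PHP (mysql_* again, $x being the number of rows you want):

$x=10;
$res=mysql_query("SELECT MAX(id) AS maxid FROM quotes");
$row=mysql_fetch_assoc($res);
$ids=array();
for ($i=0; $i<$x*10; $i++)
  $ids[]=mt_rand(1, $row['maxid']);
// duplicates are harmless in IN (), but there is no point in keeping them
$idlist=implode(',', array_unique($ids));
$res=mysql_query("SELECT quote FROM quotes WHERE id IN ($idlist) LIMIT $x");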

If you have an Apache server on Linux and want to be able to activate FrontPage extensions for virtual hosts, but do not want to install mod_frontpage, then you can download this file:

http://www.titov.net/downloads/frontpage.tar.gz

extract it to /usr/local with
tar -xzf frontpage.tar.gz -C /usr/local

and then you can install the FrontPage extensions like this (on a single line):

/usr/local/frontpage/install /www/titov.net www.titov.net somemail@titov.net 2000 titovadmin titovpass

where:
"/www/titov.net" is the DocumentRoot of the virtual host
"www.titov.net" is the ServerName of the virtual host
"somemail@titov.net" is the ServerAdmin
"2000" is the uid (user id) of the filesystem user (e.g. the id of this host’s FTP user from /etc/passwd); all installed files will be chown-ed to 2000:2000, so you should have a group with the same id/name (you should be able to easily modify this script to take a username as a parameter)
"titovadmin" and "titovpass" are the username and password that you should use for publishing from FrontPage

/usr/local/frontpage/uninstall /www/titov.net www.titov.net

will uninstall the FrontPage extensions from this virtual host.

You should have Options +ExecCGI in the virtual host, and most probably you will want to have suexec.

A real-life example:
SELECT username FROM users WHERE (ADDDATE( last_in_chat, INTERVAL 2 MINUTE) > NOW()) ORDER BY username LIMIT 30

NEVER do that. The user in question runs this query on a table with 3000+ rows, which means that the SQL server (MySQL in this case) is doing 3000+ date/time additions and will never use any index. If you write the query this way:

SELECT username FROM users WHERE last_in_chat > NOW() - INTERVAL 2 MINUTE ORDER BY username LIMIT 30

the server will do only one date/time subtraction and will probably use an index, if there is one on last_in_chat. The second query runs much faster, even without an index. In this case not having an index on last_in_chat may even be the better option, as this field probably gets updated on user actions in the chat room, which may happen more often than the displaying of this list.

Mind the Query cache.

MySQL has a nice feature, the query cache, which stores query results; if you rerun your query, the result will be served from the cache, which is quite a bit faster. The cache is invalidated by the first update/insert/delete query on the table (or any of the tables, in the case of a join), so you will not get outdated results. The downside is that when you use functions like NOW(), CURRENT_DATE(), UNIX_TIMESTAMP() and so on, the server will not put your query in the cache, as the next time the query is executed these functions may return different values.

You can rewrite the query from the example above to use the query cache:
SELECT username FROM users WHERE last_in_chat>'2005-09-15 16:44:00' ORDER BY username LIMIT 30

where “2005-09-15 16:44:00” is computed in the programming language from which you run the query, like this (PHP):

$time=date("Y-m-d H:i:00", time()-120);

As you see, the seconds are hardcoded to 00. This sacrifices the correctness of the query: you will now also see users who were last in the chat 2 to 3 minutes ago. You should decide whether you care about this. In this case it may not be a good option, but if you want the users that were active in the last 15 minutes, you most probably will not care about 1 minute.
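
Putting the two pieces together in PHP (the query from the example, assuming an open connection):

$time=date("Y-m-d H:i:00", time()-120);
$res=mysql_query("SELECT username FROM users WHERE last_in_chat>'$time' ORDER BY username LIMIT 30");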

So when you run this query, it will be stored in the cache, and the cached result will be used during this minute, as all the queries will be identical and will not contain one of the “bad” functions. In the next minute the query will be:

SELECT username FROM users WHERE last_in_chat>'2005-09-15 16:45:00' ORDER BY username LIMIT 30

and this will generate new results.

Of course, you should use this trick only if you think the query will be run more than once per minute; but if not, you probably don’t need any performance tips.

This trick is even better illustrated when you substitute CURRENT_DATE() with the current date generated from your program, as it changes much less often than NOW() (once in 24 hours, actually), and you can have the query served from the cache for a whole day.