Apache and nginx on same server with multiple IP addresses

To get this to work, you need to specify the IP and port each server should listen on in the per-domain configuration file (and NOT in the global web-server config).

nginx configuration for a domain (/etc/nginx/sites-enabled/www.mydomain.com) contains:

listen 11.22.33.44:80;
server_name www.mydomain.com mydomain.com;

The Apache configuration for the other domain pins Apache to its own IP in the same way. In this case, the directives sat at the top of /etc/apache2/sites-enabled/myotherdomain.com.
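
A sketch of what that file might contain, using 55.66.77.88 as a placeholder for the server's second IP and a typical document root (not a verbatim copy of my config):

# 55.66.77.88 is a placeholder; use your server's second IP address
Listen 55.66.77.88:80

<VirtualHost 55.66.77.88:80>
    ServerName myotherdomain.com
    DocumentRoot /var/www/myotherdomain.com
</VirtualHost>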

With this setup, each web server has its own IP to listen on, and both serve on port 80.

nginx config file which works great with CodeIgniter 2.0

This config file works great on an Ubuntu 10.04 LTS server with nginx 0.7.65. Other software includes PHP 5.3.2 and CodeIgniter 2.0. Performance-wise, I was able to squeeze 12,000 requests per second out of static files and ~250 req/s out of dynamic PHP pages.

This is a copy-paste from Chris Gaunt’s GitHub page, with a change in server name.

server {
    listen 8080;
    server_name www.metak.com metak.com;
    access_log /home/metak/metak.com/logs/access.log;
    error_log /home/metak/metak.com/logs/error.log;
    root /home/metak/metak.com/public_html;

    # If file is an asset, set expires and break
    location ~* \.(ico|css|js|gif|jpe?g|png)(\?[0-9]+)?$ {
        expires max;
        break;
    }

    # Serve the directory/file if it exists, else pass to CodeIgniter front controller
    location / {
        try_files $uri @codeigniter;
    }

    # Do not allow direct access to the CodeIgniter front controller
    location ~* ^/index\.php {
        rewrite ^/index\.php/?(.*)$ /$1 permanent;
    }

    # CodeIgniter Front Controller
    location @codeigniter {
        internal;
        root /home/metak/metak.com/public_html;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /home/metak/metak.com/public_html/index.php;
    }

    # If directly accessing a PHP file in the public dir other than index.php
    location ~* \.php$ {
        root /home/metak/metak.com/public_html;
        try_files $uri @codeigniter;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
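
After dropping the file into place, it’s worth validating the configuration and reloading nginx. The exact service command may differ by distribution; this is the Ubuntu 10.04 form:

nginx -t && /etc/init.d/nginx reload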

Compare two directories

Great command for recursively comparing directories on Linux or Mac.

diff -rq dirA dirB

It sorts all files by name and then reports if a file exists in one directory but not the other. It also reports when files with the same name exist in both directories but differ in content.
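
For example, with two hypothetical directories the output looks like this:

$ diff -rq dirA dirB
Only in dirA: notes.txt
Files dirA/config.php and dirB/config.php differ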

Subversion pre-commit hook for detection of byte-order marks (BOMs)

Byte-order marks can mess up your code badly. Some of my CodeIgniter PHP code was throwing “headers already sent” errors thanks to BOMs alone. So, our goal is to reject any commit that contains one or more PHP files with BOMs. You can easily change the script to filter other file types as well. The script is known to work on DreamHost.

Enough talking; here is the pre-commit hook, a bash script, that you were desperately searching the Internet for:

#!/bin/bash

REPOS="$1"
TXN="$2"

PHP="/usr/local/bin/php"
SVNLOOK="/usr/bin/svnlook"
AWK="/usr/bin/awk"
GREP="/bin/egrep"
SED="/bin/sed"

# List files added/updated in this transaction, keeping only .php files
CHANGED=`$SVNLOOK changed -t "$TXN" "$REPOS" | $GREP "^[UA]" | $AWK '{print $2}' | $GREP '\.php$'`

# The UTF-8 byte-order mark sequence (EF BB BF)
REGEX=$'\xEF\xBB\xBF'
GREP2="grep -l $REGEX"

for FILE in $CHANGED
do
    MESSAGE=`$SVNLOOK cat -t "$TXN" "$REPOS" "$FILE" | $GREP2`
    if [ $? -eq 0 ]
    then
        echo 1>&2
        echo "***********************************" 1>&2
        echo "Byte order mark error in: $FILE:" 1>&2
        #echo `echo "$MESSAGE" | $SED "s| -| $FILE|g"` 1>&2
        echo "***********************************" 1>&2
        exit 1
    fi
done
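
To activate the hook, save it as pre-commit inside your repository’s hooks directory and make it executable (the repository path below is an example):

chmod +x /home/user/svn/myrepo/hooks/pre-commit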

How to fix /tmp 100% usage problems

This was happening a lot on my main server (RHEL Linux with cPanel). Ever since I chose a cPanel installation, the /tmp directory was often spiking to 100%. The mystery was that I couldn’t list the files to see the cause of those spikes in /tmp usage. The regular ‘ls’ command was showing files, but nothing alarmingly big, and nowhere close to the 1GB I had allocated to /tmp.

I first changed the temporary directory for my web applications. Instead of /tmp I used /tmp-sc, and fixed my applications accordingly, so that when /tmp hits 100%, my applications can at least still use a disk cache.
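
How you point an application at the new directory depends on the application. For PHP apps, for example, the switch could be as simple as these php.ini directives (assuming /tmp-sc exists and is writable by the web server):

upload_tmp_dir = /tmp-sc
session.save_path = /tmp-sc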

Next, I decided to have MySQL use /tmp-mysql. I created a new directory with proper permissions and added a tmpdir option to the mysqld section of /etc/my.cnf:

[mysqld]
tmpdir = /tmp-mysql
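
Creating the directory itself might look like this, assuming MySQL runs as the mysql user:

mkdir /tmp-mysql
chown mysql:mysql /tmp-mysql
chmod 750 /tmp-mysql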

I restarted MySQL and voila, all my troubles were gone.

The server, with 2GB of RAM and 2 cores, now serves over 1 million PHP pages plus many more static images, and the best part is that the load on the server rarely goes above 1.

If you find this tip helpful, let me know.

Vim syntax coloring on CentOS

CentOS by default comes without syntax coloring for Vim. You have to install vim-enhanced with this command:

 yum install vim-enhanced

Then, if you’re lazy, open your ~/.bashrc and at the bottom add:

alias vi=vim

Restart your shell (or run ‘source ~/.bashrc’) for the alias to take effect!

Processing large files in PHP

I’ve been using my own PHP web statistics script for over a year now. I realized that some dates were missing from reports. It turns out PHP has a limit of 2GB or so when fopen()-ing files (on 32-bit builds), regardless of the fact that the script reads the file line by line and doesn’t keep any lines in memory.
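
A quick way to check whether your build is affected: on a 32-bit PHP build, PHP_INT_MAX is 2147483647, i.e. just under 2GB, which is where the fopen() ceiling comes from:

php -r 'var_dump(PHP_INT_MAX);'    # int(2147483647) on 32-bit builds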

The solution is to use the Linux split command to break the file into manageable pieces and process them one by one. Don’t go crazy and try to split it into 2GB pieces unless you have abundant RAM: splitting into 2GB files means the process will use about 2GB of RAM while doing it. Ouch!!!

Since I’m working with 1GB of RAM total, I decided to go with 100MB files, hence using about 100MB of RAM in the process. Also, I wanted my files to have the prefix zzz_split_ (instead of the default x); “zzz” just sorts nicely at the end of a directory listing.

split -C 100m access_log.old zzz_split_

This command split my Apache access_log into 30 pieces of up to 100MB each, making sure no lines are broken across files.
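
The resulting files get two-letter suffixes appended to the prefix, so with 30 pieces the listing runs from zzz_split_aa through zzz_split_bd:

$ ls zzz_split_*
zzz_split_aa  zzz_split_ab  zzz_split_ac  ...  zzz_split_bd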

I then fixed my PHP script to glob the split files in the directory:

$logfiles = '/home/admin/webstats/zzz_split_*';
foreach (glob($logfiles) as $logfile) {
    // glob() already returns one full path string per file
    $handle = fopen($logfile, 'r') or die("Can't open the log file");
    ...
}

Here’s a (wo)man page for split

NAME
       split - split a file into pieces

SYNOPSIS
       split [OPTION] [INPUT [PREFIX]]

DESCRIPTION
       Output fixed-size pieces of INPUT to PREFIXaa, PREFIXab, ...; default
       PREFIX is 'x'. With no INPUT, or when INPUT is -, read standard input.

       Mandatory arguments to long options are mandatory for short options
       too.

       -a, --suffix-length=N
              use suffixes of length N (default 2)

       -b, --bytes=SIZE
              put SIZE bytes per output file

       -C, --line-bytes=SIZE
              put at most SIZE bytes of lines per output file

       -l, --lines=NUMBER
              put NUMBER lines per output file

       --verbose
              print a diagnostic to standard error just before each output
              file is opened

       --help display this help and exit

       --version
              output version information and exit

       SIZE may have a multiplier suffix: b for 512, k for 1K, m for 1 Meg.