nginx + HAProxy + Thin + FastCGI + PHP5 = Load Balanced Rails with PHP Support

This was probably one of the more radical switches in architecture that we've made in the recent past. For the past 7 months we have been successfully running Apache + mod_proxy + mongrel with some limited PHP applications bolted on, but the whole setup felt a tad bloated and more than a little unstable as we tested various scaling scenarios. With the rails community chatting about the hotness that is thin, nginx, and HAProxy, we decided to see what it would take to migrate.

The catch with our infrastructure, though, is that we have broken our static assets apart from rails, so the usual localhost simplicity isn't there, which, unfortunately, is what most of the tutorials assume. In our case the application sits in a pool of servers, and one of the things we wanted to do was leverage HAProxy to balance each nginx instance over a group of primary and secondary application servers, with the primary and secondary status staggered between the nginx instances. Igvita's post was the inspiration for this, and our goal is to create a more fault tolerant environment built on shared services rather than our current setup of largely discrete stacks.

The first thing I tackled was setting up nginx by breaking apart the rails application and any PHP applications into separate virtual hosts. First up is the rails config…

upstream thin {
    server 127.0.0.1:8700;
}

server {
    listen       80;
    server_name  first.server.name;
    rewrite ^/(.*) https://what.ever.you.want/$1 permanent;
}

server {
    listen 443;
    ssl on;
    ssl_session_timeout  5m;
    ssl_protocols  SSLv2 SSLv3 TLSv1;
    ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers   on;

    # path to your certificate
    # if you have an intermediate cert then you need to add the contents to the end of the cert file
    ssl_certificate /where/your/cert/is.pem;

    # path to your ssl key
    ssl_certificate_key /where/your/key/is.key;

    # standard rails configuration goes here.
    root /location/of/your/site/root;

    # rewrite_log on;

    # serve the maintenance page if one has been posted
    if (-f $document_root/system/maintenance.html) {
        rewrite  ^(.*)$  /system/maintenance.html last;
        break;
    }

    # serve the cached index page if one exists, otherwise hand off to thin
    location ~ ^/$ {
        if (-f $document_root/index.html) {
            rewrite (.*) /index.html last;
        }
        proxy_pass  http://thin;
    }

    # rails page caching: only proxy to thin when no cached .html copy exists
    location / {
        if (!-f $request_filename.html) {
            proxy_pass  http://thin;
        }
        rewrite (.*) $1.html last;
    }

    location ~ \.html$ {
        root /location/of/your/site/root;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|pdf|txt|js|mov)$ {
        root  /location/of/your/site/root;
    }

    # nginx will not start with a second "location /" block in the same
    # server, so the proxy settings live at the server level and apply to
    # the proxy_pass calls above
    proxy_redirect     off;
    proxy_set_header   Host              $host;
    proxy_set_header   X-Real-IP         $remote_addr;
    proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto https;
}
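
Before pointing traffic at it, it's worth letting nginx vet the file itself; it will flag things like a stray duplicate location block before you reload:

sudo nginx -t && sudo /etc/init.d/nginx reload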

And our PHP config…

server {
    ### PHP Support ###
    listen       80;
    server_name  second.server.name;
    access_log  /location/of/your/site/root/logs/blog-access.log;
    error_log  /location/of/your/site/root/logs/blog-error.log;

    # WordPress-style permalinks: anything that isn't a real file gets
    # routed through index.php
    if (!-e $request_filename) {
        rewrite ^([_0-9a-zA-Z-]+)?(/wp-.*) $2 last;
        rewrite ^([_0-9a-zA-Z-]+)?(/.*\.php)$ $2 last;
        rewrite ^ /index.php last;
    }

    location / {
        root /location/of/your/site/root;
        index index.html index.php index.htm;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|pdf|txt|js|mov)$ {
        root /location/of/your/site/root;
    }

    # pass the PHP scripts to the FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_FILENAME  /location/of/your/site/root/$fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
    }
}
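
One thing the config above takes for granted is that something is actually answering FastCGI on 127.0.0.1:9000; nginx won't spawn PHP for you. One common way to do it (binary paths, child count, and user here are assumptions for your distro) is spawn-fcgi from the lighttpd package:

spawn-fcgi -a 127.0.0.1 -p 9000 -C 4 -u www-data -g www-data -f /usr/bin/php5-cgi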

Next up is the HAProxy configuration…

global
	log 127.0.0.1	local0
	log 127.0.0.1	local1 notice
	nbproc		1
	pidfile		/var/run/haproxy.pid
	#debug
	#quiet
	user haproxy
	group haproxy

defaults
	log		global
	mode		http
	option		httplog
	option		dontlognull
	retries		15
	redispatch
	contimeout	60000
	clitimeout	150000
	srvtimeout	60000
	option          httpclose     # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
	option          abortonclose  # enable early dropping of aborted requests from pending queue
	option          httpchk       # enable HTTP protocol to check on servers health

listen	thin *:8700
	mode http
	option httpchk
	option forwardfor except 127.0.0.1/8
	balance roundrobin
	server web01 hostname-of-server:8100 weight 1 minconn 1 maxconn 6 check inter 40000
	etc....
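
For the staggered primary/secondary pools mentioned at the top, HAProxy's backup keyword does the heavy lifting: a server marked backup is only used once the primaries fail their health checks. A hypothetical pool on one balancer might look like the lines below (hostnames invented), with the other balancer swapping which server carries the backup flag:

	server web01 app01.internal:8100 weight 1 minconn 1 maxconn 6 check inter 40000
	server web02 app02.internal:8100 weight 1 minconn 1 maxconn 6 check inter 40000 backup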

There are a couple of things to note here. To let HAProxy accept connections from nginx instances on hosts other than localhost you'll need to chuck in a wildcard bind: listen thin *:8700. And to get logging running you'll need to edit /etc/syslog.conf, adding the following lines:

# Save HA-Proxy logs
	local0.*                                                /var/log/haproxy_0.log
	local1.*                                                /var/log/haproxy_1.log

As well as edit /etc/default/syslogd:

# For remote UDP logging use SYSLOGD="-r"
SYSLOGD="-r"
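
Then bounce syslogd so it picks up the flag and starts catching HAProxy's UDP log traffic (on Ubuntu 8.04 the init script is sysklogd):

sudo /etc/init.d/sysklogd restart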

One last thing that drove me almost to the brink of madness: HAProxy, at least in the build on Ubuntu 8.04, is finicky about how the configuration file is laid out. Each section (global, defaults, and listen) has to have its parameters preceded by a tab; with anything else HAProxy would start and accept requests from nginx, but it would not fetch from the thin server pool.

So that is our front-end, what about the application pool? Turns out that Thin is just as easy to set up as a mongrel cluster and only took a minimum of effort on our part to get it dialed in with God and serving upstream. We edited the stock init script to reflect where we store the yamls and massaged God for the changes in clustering.

Here’s our init script:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          thin
# Required-Start:    $local_fs $remote_fs
# Required-Stop:     $local_fs $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      S 0 1 6
# Short-Description: thin initscript
# Description:       thin
### END INIT INFO

# Original author: Forrest Robertson

# Do NOT "set -e"

DAEMON=/usr/bin/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/location/of/your/yamls

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

case "$1" in
  start)
	$DAEMON start --all $CONFIG_PATH
	;;
  stop)
	$DAEMON stop --all $CONFIG_PATH
	;;
  restart)
	$DAEMON restart --all $CONFIG_PATH
	;;
  *)
	echo "Usage: $SCRIPT_NAME {start|stop|restart}" >&2
	exit 3
	;;
esac

:
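
To have the script fire on boot, hook it into the default runlevels:

sudo update-rc.d thin defaults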

And here’s a sample yaml:

---
user: user-which-runs
group: group-which-runs
chdir: /location/of/your/app
log: log/thin.log
port: 8100
environment: staging
pid: /location/of/your/pids.pid
servers: 3
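
With port: 8100 and servers: 3, thin brings up three instances on ports 8100 through 8102 and suffixes the pid and log files with the port number, which lines up with what the God config below expects:

sudo /etc/init.d/thin start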

Our God config is very similar to what we had been running with the mongrel cluster:

RAILS_ROOT = "/location/of/your/app"

%w{8100 8101 8102}.each do |port|
  God.watch do |w|
    w.group = 'pack_01'
    w.name = "thin-#{port}"
    w.interval = 30.seconds # default
    w.start = "thin start -C /location/of/your.yaml -o #{port}"
    w.stop = "thin stop -C /location/of/your.yaml -o #{port}"
    w.restart = "thin stop -C /location/of/your.yaml -o #{port} && thin start -C /location/of/your.yaml -o #{port}"
    w.start_grace = 15.seconds
    w.restart_grace = 15.seconds
    w.pid_file = "/location/of/your/pids.#{port}.pid"

    w.behavior(:clean_pid_file)

    w.start_if do |start|
      start.condition(:process_running) do |c|
        c.interval = 5.seconds
        c.running = false
      end
    end

    w.restart_if do |restart|
      restart.condition(:memory_usage) do |c|
        c.above = 150.megabytes
        c.times = [3, 5] # 3 out of 5 intervals
      end

      restart.condition(:cpu_usage) do |c|
        c.above = 50.percent
        c.times = 5
      end
    end

    # lifecycle
    w.lifecycle do |on|
      on.condition(:flapping) do |c|
        c.to_state = [:start, :restart]
        c.times = 5
        c.within = 5.minute
        c.transition = :unmonitored
        c.retry_in = 10.minutes
        c.retry_times = 5
        c.retry_within = 2.hours
      end
    end
  end
end
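
Loading the watches is just a matter of pointing god at the file (the path is whatever you use):

sudo god -c /location/of/your/config.god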

There you have it, a completely rebuilt stack leveraging lean, fast, and stable services.

Gratefully cribbed from HowtoForge, John Yerhot, and Igvita.

EC2: Pound + Apache, Mongrel Cluster, and MySQL Cluster

Alternately, I could have titled this my 36-hour nightmare. Last week, high off the presentation, I built out and deployed the following configuration.

[Diagram: EC2 Cluster]

Everything was nice and tight, and after loading QA data it ran like a champ. The problem was that the QA data was pretty thin, only a fraction of the size of the production data. When we loaded production data into it (which, by the way, took nearly an hour to import), performance in the Cluster ground to a halt and we were faced with MySQL timing out the mongrels. Needless to say, after another 36 hours of work we abandoned this model and are looking at plain old replication for our data backend.

What could have given us all that grief? A couple of things spring to mind. The instances have 1.7GB of RAM and a single-core processor, which for now works like a champ for a single MySQL server, but for whatever reason is not enough for a cluster under load. Also, running both the SQL and Data Node services on the same box was likely less than inspired, as the SQL service would spin up, chew into the remaining RAM, and often dominate the CPU. On top of that, when we launched the cluster we were running some grossly inefficient queries with little or no indexing on the tables. A huge issue.

So we pulled back. At the moment we are still running the three-legged system (one instance running Pound, Apache, Monit, and Mongrels, one Harvester, and one MySQL instance), but we made significant changes to the DB so that all the bloated joins that Ruby likes to make are hitting indexed tables, and we tweaked my.cnf to boost the key buffer to 30% of RAM. Things seem better and we bought ourselves a little breathing room, but we are still hitting the limit on the number of mongrels we can run on a single instance (10 seems to be the upper threshold for stability), so we need to work out a method for building a replicated set that will auto-recover after the countless data migrations the dev team performs. That will be fun!
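
For the curious, the my.cnf change boils down to a single line, with 512M being roughly 30% of the instance's 1.7GB; the ALTER TABLE below is only a made-up illustration of the sort of indexes we added under the join columns:

[mysqld]
key_buffer = 512M

ALTER TABLE widgets ADD INDEX index_widgets_on_user_id (user_id);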

Sendmail, Ubuntu, Yahoo, and You!

One of the main benefits of sympathetic pregnancy insomnia is that I am able to get a jump on all these little projects that I had been meaning to take care of but never had the motivation for. Configuring sendmail to use Yahoo Mail as a relay for outbound mail from Apache is one of those projects I've put off for far too long. Thanks to two HowTos I managed to knock this out in a couple of minutes.

sudo apt-get install sendmail

sudo /etc/init.d/sendmail stop

sudo nano /etc/mail/authinfo and add the following:

AuthInfo:yahoo.com "U:babydaddy@your_ATT_Domain" "I:babydaddy@your_ATT_Domain" "P:password_here" "M:PLAIN"
AuthInfo: "U:babydaddy@your_ATT_Domain" "I:babydaddy@your_ATT_Domain" "P:password_here" "M:PLAIN"

sudo chmod 660 /etc/mail/authinfo to lock it down.
sudo makemap hash /etc/mail/authinfo < /etc/mail/authinfo to make the map file.
sudo nano /etc/mail/sendmail.mc and look for or add the following lines:

define(`confAUTH_OPTIONS', `A')dnl
define(`confAUTH_MECHANISMS', `LOGIN PLAIN DIGEST-MD5 CRAM-MD5')dnl
TRUST_AUTH_MECH(`LOGIN PLAIN DIGEST-MD5 CRAM-MD5')dnl
FEATURE(`authinfo',`hash -o /etc/mail/authinfo.db')dnl
define(`SMART_HOST', `esmtp:[smtp.sbcglobal.yahoo.com]')dnl

sudo cp /etc/mail/sendmail.cf /etc/mail/sendmail.cf.bak because you can never be too sure.

sudo make /etc/mail/sendmail.cf -C /etc/mail to rebuild your sendmail.cf.
sudo /etc/init.d/sendmail start

To test your setup, create a text file that at least includes a To and a Subject:

To: ServerMonkey@foo
Subject: Linux Rules Windows Drools

sudo sendmail -Am -v -t < your_text_file to shoot a copy out to yourself.

You should see something like this:

babydaddy@your_ATT_Domain… Connecting to smtp.sbc.mail.yahoo4.akadns.net. via esmtp…
220 smtp110.sbc.mail.re2.yahoo.com ESMTP
>>> EHLO your_domain
250-smtp110.sbc.mail.re2.yahoo.com
250-AUTH LOGIN PLAIN XYMCOOKIE
250-PIPELINING
250 8BITMIME
>>> AUTH PLAIN xxxxxxxxxxxxxxxxxxxxxxxx
235 ok, go ahead (#2.0.0)
>>> MAIL From:<root@your_domain> AUTH=root@your_domain
250 ok
>>> RCPT To:<ServerMonkey@foo>
>>> DATA
250 ok
354 go ahead
>>> .
250 ok 1166057377 qp 49511
babydaddy@your_ATT_Domain… Sent (ok 1166057377 qp 49511)
Closing connection to smtp.sbc.mail.yahoo4.akadns.net.
>>> QUIT
221 smtp110.sbc.mail.re2.yahoo.com

sudo nano /etc/php5/apache2/php.ini so that it knows where sendmail hangs out.

Find the line with sendmail_path, delete the ; and edit it to read:

sendmail_path = /usr/sbin/sendmail -i -t

sudo /etc/init.d/apache2 reload to make the changes stick.

Sendmail is now configured to use Yahoo as a relay! Now, ideally you should be able to massage this to use any external SMTP server that allows plain text authentication.
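
For example, swapping in a different provider should just be a matter of changing the SMART_HOST define (and the matching AuthInfo host) before rebuilding sendmail.cf; smtp.example.com here is a stand-in:

define(`SMART_HOST', `esmtp:[smtp.example.com]')dnl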

Gratefully cribbed from here and here.