Tag Archives: Erlang

Solving performance problems in a pub-sub Erlang server

Intro
So I wrote (see some prehistory here) a kind of notification server where clients can subscribe to events and be notified when an event of interest happens. Clients use the HTTP long poll method to get notifications delivered, and one of the applications of the server is a chat room.

Symptoms
In my case there were several problems:

  1. general lack of performance (I started optimizing at somewhere around 200-300 messages per second)
  2. under high load the server would lock up, sometimes for an extended period of time, delivering no messages at all
  3. even under moderate load, performance was not stable and would periodically drop (sometimes to a complete lockup, sometimes not; sometimes for a couple of seconds, sometimes for longer)
  4. an in-depth investigation with tcpdump showed that the server could not even accept TCP connections

Side note on the server not accepting TCP connections
It is interesting how this happens, though. I don’t know if it is Linux 2.6-specific behavior or the norm, but the sequence is:

  1. client sends SYN
  2. server responds with ACK, but with the acknowledgment number set to some arbitrary big number
  3. client drops the connection by sending RST

So should you see similar behavior, note that this is just a symptom, not the disease. The problem is somewhere deeper. In my case, increasing the TCP listen backlog from mochiweb’s default of 30 helped a bit (fewer timeouts were observed), but performance still sucked.
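
For reference, here is roughly how raising the backlog looks when starting mochiweb. This is just a sketch: the {backlog, ...} option is what my mochiweb version exposed, and the values shown are illustrative, not a recipe.

%% Raise the TCP listen backlog from mochiweb's default of 30.
mochiweb_http:start([{backlog, 511},  % larger accept queue
                     {name, ?MODULE},
                     {loop, Loop}]).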

Fixing Root Cause
So how are pub-sub servers in Erlang generally built? You get a message being processed in some process; it might have been received from somewhere else or originate in this process. And then you have a bunch of waiting processes representing connected, subscribed clients waiting for a message to be delivered. Each of these processes normally represents a TCP connection, or in my case an HTTP long polling connection, and delivering a message to such a process releases it from the wait state and allows the message to be delivered to the end user. There is of course some router module or process which determines the subset of processes (PIDs) to which the message should be delivered. How to do this efficiently is a very interesting topic, but not for this post. Then you do something like
lists:foreach(fun(Pid) -> send_message(Pid, Message) end, PidList)

The result of this is that each of the processes in the target group (selected by the router) becomes immediately available for execution. And if the group is big, chances are that the process broadcasting these notifications will be preempted. The thing is, this actually causes a context switching storm. I’m not 100% sure how the Erlang runtime is implemented, but it seems that when a process receives a message it gets some kind of priority and is scheduled for execution, like it happens in some OSes. So the message sending loop may take quite a while.

Now, if the message broadcast loop is more complex – say it consists of two nested loops, and inside them you do some non-trivial operations followed by sending the message to one particular PID – then things become very bad. Context switching overhead, no matter how light it is in Erlang, kills your performance.
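
For illustration, here is a minimal sketch of that problematic shape; expensive_format/1, subscribers/1 and send_message/2 are hypothetical helpers, not code from my server:

%% Anti-pattern: non-trivial work and sends are interleaved, so every
%% send can wake a receiver and preempt this process mid-loop.
lists:foreach(
    fun(Topic) ->
        lists:foreach(
            fun(Pid) ->
                Msg = expensive_format(Topic), % non-trivial work per iteration
                send_message(Pid, Msg)         % send inside the inner loop
            end,
            subscribers(Topic))
    end,
    Topics).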

Recipe

  1. If you have complex loops to calculate which PIDs to send the message to, and you do the message sending inside those loops – rewrite the code. First prepare the list of PIDs using whatever complex procedures you have, then send the messages to those PIDs in one shot.
  2. When sending messages to a bunch of PIDs, temporarily boost the priority of your process so it won’t be preempted.

Example:

PidMsgList = generate_pid_list(WhatEver),  % list of {Pid, Msg} pairs
OldPri = process_flag(priority, high),     % raise priority, saving the old one
lists:foreach(fun({Pid, Msg}) -> send_message(Pid, Msg) end, PidMsgList),
process_flag(priority, OldPri).            % restore the original priority


That’s basically it. The result is that I’m now reliably achieving about 1.5k messages/sec with all the stuff like HTTP/JSON parsing, ETS operations, logging, etc. I would like to pump this number a few times higher, but at the moment that’s what I can get. I will come back when I learn something new 🙂

PS. You may also find this discussion, which followed the $1000 code challenge, useful:
http://groups.google.com/group/erlang-programming/browse_thread/thread/1931368998000836/b325e869a3eea26a

Get Unix timestamp in Java, Python, Ruby, Erlang, JavaScript, Go, MySQL

Mostly a note for myself. To get the Unix timestamp value in seconds:

Java:

long timestamp = System.currentTimeMillis()/1000;

Python:

import time
timestamp = int(time.time())

JavaScript:

var ts = Math.floor(Date.now()/1000); // Math.floor(new Date().getTime()/1000) also works, but Date.now() is faster

Ruby:

ts = Time.now.to_i # integer seconds; use Time.now.to_f for sub-second precision

Go:

package main
import (
 "time"
 "fmt"
)

func main() {
 // Get and print integer timestamp
 timestamp := time.Now().Unix()
 fmt.Printf("unixtime: %d\n", timestamp)
 // get and print floating point timestamp with sub-second precision
 timestampFloat := float64(time.Now().UnixNano()) / 1.0e9
 fmt.Printf("unixtime: %f\n", timestampFloat)
 // Convert integer timestamp back to Time:
 t1 := time.Unix(timestamp, 0)
 fmt.Printf("Time: %v\n", t1)
 // Convert floating point timestamp back to Time:
 t2 := time.Unix(int64(timestampFloat), int64((timestampFloat-float64(int64(timestampFloat)))*1e9))
 fmt.Printf("Time: %v\n", t2)
}

Erlang:

{Mega, Secs, _} = now(),
Timestamp = Mega*1000000 + Secs.
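
And, mirroring the Go example, a sketch of converting the integer timestamp back to a {Date, Time} tuple in UTC, using the stock calendar module to compute the epoch offset:

EpochSeconds = calendar:datetime_to_gregorian_seconds({{1970,1,1},{0,0,0}}),
{Date, Time} = calendar:gregorian_seconds_to_datetime(Timestamp + EpochSeconds).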

Working with timestamps in MySQL


mysql> SELECT UNIX_TIMESTAMP('1997-10-04 22:23:00');
-> 875996580
mysql> SELECT FROM_UNIXTIME(1111885200);
+---------------------------+
| FROM_UNIXTIME(1111885200) |
+---------------------------+
| 2005-03-27 03:00:00       |
+---------------------------+

Solution for Erlide hangs Eclipse problem

If you are using Erlide to develop Erlang applications, you may sooner or later find yourself in a situation where the IDE just hangs completely whenever you try to access any Erlide functions. You cannot even go to the Erlide section of Eclipse Preferences! This seems to be a kind of mystery; at least googling didn’t reveal any helpful information, except hinting that the problem could be due to backend connection issues. As far as I understand, the backend is the Erlang compiler/toolchain used by Erlide. I can imagine it is used when compiling, or when doing syntax checks, but why does it hang in the editor and the preferences dialog?!

I was able to find the Erlide logfile (~/Documents/workspace_erlide.log under MacOS), and it suggested that Erlide was stuck in a loop trying to invoke external Erlang. In particular, it complained that the following command was failing with result code 126:
/opt/local/var/macports/software/erlang/R12B-3_2/opt/local/lib/erlang//bin/erl -name f0a30ba_erlide@alexey-timanovskys-macbook-pro.local -setcookie erlide
When I tried to execute it from the command line, I got errors from Erlang. Then I discovered that I had two installations of Erlang – R12B 3.2 and 5.0; obviously I had used macports to upgrade to the most recent one, but it kept the previous version too. And Erlide was configured to use the old Erlang setup, which didn’t work for whatever reason. So the next step was to find where this is configured; I grepped and found it in
~/Documents/workspace/.metadata/.plugins/org.eclipse.core.runtime/.settings/org.erlide.core.prefs
I replaced the line

runtimes/erlang/homeDir=/opt/local/var/macports/software/erlang/R12B-3_2/opt/local/lib/erlang/
with

runtimes/erlang/homeDir=/opt/local/lib/erlang/

and restarted Eclipse. It worked!

So be careful: breaking anything in the Erlang configuration will make Erlide unusable – you can’t even go to Preferences and change the Erlang home directory to the correct one.

I’m using Eclipse 3.4.1 and tried Erlide versions 0.5.0 and 0.5.1.

Correct implementation of fast server logging in Erlang

Making a long story short: just use disk_log.

Actual history with concrete figures (I hate abstract discussions):
I first tried to implement logging as a separate singleton process (gen_server). Formatting was heavily optimized but performed in the server process; the idea was to offload the main service processes as much as possible, as I was working on a latency-critical application. Output was plain text using raw file output. The best performance I could get was about 300 events/sec (on a MacBook Pro). And I ran into the infamous problem that, due to scheduling implementation particularities in Erlang, the mailbox of the logger process was continuously growing, eventually leading to a hard crash of the Erlang VM (with a malloc exception).

I then moved all formatting work to the calling processes. I used buffered writing at the Erlang driver level (delayed_write) and at the logger process level (accumulating 1000 events before writing to disk). It worked slightly better – about 1000 events/sec – but eventually hit the same malloc out-of-memory exception due to mailbox overflow.

So I decided to give disk_log a try. I used the internal format (binary), and I also changed to a semi-synchronous model (using disk_log:log and disk_log:log_terms) so that multiple processes can be served by the log module, but each process has to wait until its message is processed. The results are great: about 100K events/sec, and mailbox overflow is inherently impossible. There are of course implications for parsing these logs, but they are absolutely worth the performance gain.
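
Here is a minimal sketch of that setup; the log name, file path and size limits are illustrative, not my actual configuration:

%% Open a wrap log storing terms in the internal (binary) format.
{ok, Log} = disk_log:open([{name, event_log},
                           {file, "/var/log/myapp/events.LOG"},
                           {format, internal},       % terms, not plain text
                           {type, wrap},
                           {size, {10485760, 10}}]), % 10 files of 10 MB each

%% disk_log:log/2 is synchronous: the caller blocks until its term is
%% accepted, so the log's mailbox cannot grow without bound.
ok = disk_log:log(Log, {now(), request_served, <<"payload">>}),

%% Read the internal format back for offline analysis.
{_Continuation, Terms} = disk_log:chunk(Log, start).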

Profiling a Running Erlang Server

Say you want to profile a running (maybe even production) Erlang server. You would do it with fprof. It is relatively easy to profile a single function – just follow the documentation – so let’s see how to profile a whole application.

  1. Start profiling for all processes of interest
    fprof:trace([start, {procs, [whereis(pid1), whereis(pid2)]}]).
    pid1, pid2, etc. are registered processes. Fprof will profile them and all the processes they spawn, so depending on your architecture it may be enough to include the single process which listens on the socket and accepts connections. The documentation states that whereis is not necessary, but it doesn’t work for me otherwise.
  2. After a while, stop profiling. Note that trace files are really big, and processing them in the subsequent steps takes quite a while, so the first time you wouldn’t want to run profiling for the whole day 🙂 Just try 30-60 seconds to begin with. Also keep in mind that the load will increase 5-10 times, so if you run it on a production server, make sure you have enough resources 🙂
    fprof:trace([stop]).
  3. Process the data. This reads the raw trace (by default from the ‘fprof.trace’ file written during tracing) and builds the profile data in memory; you can also give the trace file another name during tracing so you can find and load it later.
    fprof:profile().
  4. Analyse and save the data to a human-readable text file – ‘fprof.analysis’:
    fprof:analyse([totals, {dest, "fprof.analysis"}]).
  5. Clear all memory. This makes sense if you want to let the server continue running.
    fprof:stop().

I will not explain how to read the resulting text file – it is a bit cryptic and you have to read the fprof documentation; frankly speaking, I don’t understand it fully 🙂

Toward a million-user long-poll HTTP application – nginx + erlang + mochiweb :)

Intro
First, this post and its title are inspired by the following great series of articles, which you absolutely must read if you are interested in the subject:
http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-1/
http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-2/
http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3/

I’m quite far from 1M users, but I’m still getting a load that the out-of-the-box configuration cannot cope with. What I’m building is a long-poll HTTP server which implements chat and some other real-time notifications for web clients. Nginx is used as a reverse proxy (an HTTP router, basically). Let’s say we want to handle N connections simultaneously (N should be much bigger than 1000 🙂 ). What parameters need to be changed? Note: all numbers are approximate.

1. Nginx
First, the number of file handles (descriptors) provided by the OS. It is configured in the script which launches nginx; in my case it is /etc/init.d/nginx. So I just add

ulimit -n N*2

Mind that a proxy naturally uses two sockets per connection (uplink and downlink), hence the factor of two.

Now let’s take a look at the Nginx config file (/etc/nginx/nginx.conf for me). Here is an extract from the Nginx manual on the events module:

The worker_connections and worker_processes values from the main section allow you to calculate the max_clients value:
max_clients = worker_processes * worker_connections
In a reverse proxy situation, max_clients becomes
max_clients = worker_processes * worker_connections/4

So, assuming a single worker process, we have to set (e.g. for N = 10000 that means worker_connections 40000):

events {
    worker_connections N*4;
}

Then, as we use Nginx as a proxy not for a regular HTTP server but for a long-polling one, we have to tune the proxy parameters:

location /my_url {
    proxy_pass http://127.0.0.1:8000;
    proxy_buffering off;
    proxy_read_timeout 3600;
}

So we forward all requests to “/my_url” to the HTTP server running on localhost port 8000. We disable response buffering in Nginx and tell it that the server may take a long time to respond (up to an hour here), so Nginx should wait and not time out.

OK, we are done with Nginx; let’s move on to the server.

2. Erlang
Again, in the start script we set the number of available file descriptors:

ulimit -n N

Then we pass two additional parameters to erlang:

erl +K true +P N ...your params here

The +K option enables kernel polling, which saves some CPU cycles when handling a multitude of connections. +P sets how many parallel processes the Erlang VM can have; by default it is 32768. My application is based on the mochiweb library and uses one process per connection (the usual case for Erlang server applications), so we need at least N processes. Note that some recommend setting it to the maximum possible value, roughly 137M, but that makes the Erlang VM allocate about 1GB of memory. Not a big deal per se, since it is in virtual address space, but I can imagine that the internal reference tables for this memory heap, and the Erlang process tables/mailboxes/whatever, are also big and can cause some overhead. So I’d prefer to be on the safe side and set it to the value that is really required.

3. Application
The last thing we want is to tell the mochiweb server that we want more than the default 2048 simultaneous connections. So I change the start parameters:

mochiweb_http:start([{max, N}, {name, ?MODULE}, {loop, Loop} | Options]).