Tag Archives: pub-sub

Solving performance problems in a pub-sub Erlang server

So I wrote (see some prehistory here) a kind of notification server where clients can subscribe to events and be notified when an event of interest happens. Clients use HTTP long polling to get notifications delivered, and one application of the server is a chat room.

In my case there were several problems:

  1. a general lack of performance (I started optimizing at around 200-300 messages per second)
  2. under high load the server would lock up, sometimes for an extended period of time, and deliver no messages at all
  3. even under moderate load performance was not stable and would eventually drop for some period of time (sometimes to a complete lockup, sometimes not; sometimes for a couple of seconds, sometimes for longer)
  4. an in-depth investigation with tcpdump showed that the server could not even accept connections.

Side note on the server not accepting TCP connections
It is interesting how this happens, though. I don't know whether it is behavior specific to Linux 2.6 or the norm, but the sequence is:

  1. the client sends a SYN
  2. the server responds with an ACK, but with the acknowledgment number set to some arbitrary large value
  3. the client drops the connection by sending an RST

So should you see similar behavior, note that it is just a symptom, not the disease; the problem lies somewhere deeper. In my case, increasing the TCP listen backlog from mochiweb's default of 30 helped a bit and fewer timeouts were observed, but performance still suffered.
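For reference, the backlog is ultimately a gen_tcp:listen/2 option; mochiweb just forwards a {backlog, N} tuple down to it. A minimal sketch of raising it on a plain listener (port number and other options here are illustrative, not from my actual setup):

```erlang
%% {backlog, N} sets the kernel-level listen queue length: how many
%% half-open/pending connections are queued before new SYNs get rejected.
{ok, ListenSocket} = gen_tcp:listen(8080, [
    binary,
    {reuseaddr, true},
    {backlog, 1024}    % raised well above the default of 30 mentioned above
]).
```

This only buys queueing headroom; it does not fix a server that is too busy to call accept in the first place.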

Fixing Root Cause
So how are pub-sub servers in Erlang generally built? A message gets processed in some process; it might have been received from somewhere else or originate in that process. Then you have a bunch of waiting processes representing connected, subscribed clients, each waiting for a message to be delivered. Each of these processes normally represents a TCP connection, or in my case an HTTP long-polling connection, and delivering a message to such a process releases it from its wait state and lets the message reach the end user. There is, of course, some router module or process which determines the subset of processes (PIDs) to which the message should be delivered. How to do that efficiently is a very interesting topic, but not for this post. Then you do something like
lists:foreach(fun(Pid) -> send_message(Pid, Message) end, PidList)
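The whole pattern can be sketched as a tiny self-contained module (module and function names are mine, and deliver/2 just prints instead of completing a real long-poll response):

```erlang
-module(pubsub_sketch).
-export([subscriber/1, broadcast/2]).

%% Each subscriber process sits in receive until a notification arrives,
%% writes it out, and loops back to wait for the next one. In the real
%% server this receive is what parks the HTTP long-poll connection.
subscriber(Conn) ->
    receive
        {notify, Msg} ->
            deliver(Conn, Msg),
            subscriber(Conn)
    end.

%% The router has already selected PidList; push Msg to every subscriber.
broadcast(PidList, Msg) ->
    lists:foreach(fun(Pid) -> Pid ! {notify, Msg} end, PidList).

%% Stand-in for writing the response to the client's socket.
deliver(Conn, Msg) ->
    io:format("~p <- ~p~n", [Conn, Msg]).
```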

The result is that each process from the target group (selected by the router) becomes immediately available for execution. If the group is large, chances are that the process broadcasting these notifications will be preempted, and this causes a context-switching storm. I'm not 100% sure how the Erlang runtime is implemented, but it seems that when a process receives a message it gets some kind of priority boost and is scheduled for execution, as happens in some OSes. So the message-sending loop may take quite a while.

Now, if the message broadcast loop is more complex, say it consists of two nested loops, and inside them you do some non-trivial work followed by sending the message to one particular PID, then things get very bad. Context-switching overhead, no matter how lightweight it is in Erlang, kills your performance.
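Here is a sketch of that anti-pattern next to the refactored version; render_for/2 and subscribers/1 are hypothetical helpers standing in for whatever non-trivial routing work you do (stubbed out here so the module compiles):

```erlang
-module(broadcast_sketch).
-export([broadcast_bad/2, broadcast_good/2]).

%% Anti-pattern: expensive per-client work and message sends interleaved
%% inside nested routing loops, so every send can preempt the broadcaster.
broadcast_bad(Rooms, Msg) ->
    lists:foreach(
      fun(Room) ->
              lists:foreach(
                fun(Pid) ->
                        Rendered = render_for(Pid, Msg),  % non-trivial work
                        Pid ! {notify, Rendered}
                end,
                subscribers(Room))
      end,
      Rooms).

%% Refactored: do all the expensive work first, collecting {Pid, Msg}
%% pairs, then deliver everything in one tight loop.
broadcast_good(Rooms, Msg) ->
    Targets = [{Pid, render_for(Pid, Msg)}
               || Room <- Rooms, Pid <- subscribers(Room)],
    lists:foreach(fun({Pid, M}) -> Pid ! {notify, M} end, Targets).

%% Hypothetical stubs; the real versions would hit your router state.
subscribers(_Room) -> [].
render_for(_Pid, Msg) -> Msg.
```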


So the fix boils down to two things:

  1. If you have complex loops that calculate which PIDs to send the message to, and you send messages inside those loops, rewrite the code: first prepare the list of PIDs using whatever complex procedures you have, then send the messages to those PIDs in one shot.
  2. When sending messages to a bunch of PIDs, temporarily boost the priority of your process so it won't be preempted.


PidList = generate_pid_list(WhatEver),
OldPri = process_flag(priority, high), % raise scheduler priority, save old one
lists:foreach(fun({Pid, Msg}) -> send_message(Pid, Msg) end, PidList),
process_flag(priority, OldPri)         % restore previous priority

That’s basically it. The result is that I’m now reliably achieving about 1.5k messages/sec, including all the surrounding work: HTTP/JSON parsing, ETS operations, logging, etc. I would like to push this number a few times higher, but at the moment that’s what I can get. I will come back when I learn something new 🙂

PS. You may also find the discussion that followed the $1000 code challenge useful: