My Favorite Erlang Pattern

Often a gen_server has what is essentially a synchronous interface but needs to deal with incoming messages while it works. This is tricky at first glance: a plain gen_server:call blocks the caller until handle_call returns, and if the intermediate process does its work inside handle_call it can't process the messages it is waiting on. I've seen the following callback approach used from the caller:

% Calling process

need_to_do_something() ->
    % Capture self() outside the fun: inside the fun, self() would be the
    % process running the callback, not this one.
    Self = self(),
    Callback = fun(Result) ->
        gen_server:cast(Self, {result, Result})
    end,
    receiver:do_something(Callback).

% Get the result back separately :c
handle_cast({result, Result}, State) ->
    io:format("Got data: ~p!", [Result]),
    {noreply, State}.

In the receiver:

% Receiving process.

do_something(Callback) ->
    gen_server:cast(target_process, {do_something, Callback}).

handle_cast({do_something, Callback}, State) ->
    {noreply, State#{callback => Callback}}.

% Handle incoming data.
handle_info(IntermediateData, #{callback := Callback} = State) ->
    Callback(IntermediateData),
    {noreply, State#{callback => undefined}}.

Which works, but it is a bit nasty. First you lose gen_server:call's implicit monitoring, and it is a bit cumbersome to write. To propagate errors between the two processes you have to link them and then unlink afterwards. Further, it's even worse when this is being done from a process that is not a gen_server (like a Task): it has to be converted to a gen_server or receive the callback messages by hand, roughly as sketched below. Which is FINE, but there is a gooder way.
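For reference, here is about what a plain (non-gen_server) process ends up writing to use the callback style. This is a hypothetical sketch: the plain-message callback, the hand-rolled monitor, and the selective receive are my additions to show the boilerplate you take on.

do_something_and_wait() ->
    Self = self(),
    Ref = make_ref(),
    % A plain process has no handle_cast, so the callback just sends a message.
    Callback = fun(Result) -> Self ! {Ref, Result} end,
    % Monitor by hand so a crashed receiver doesn't leave us stuck in the receive.
    MonRef = monitor(process, target_process),
    receiver:do_something(Callback),
    receive
        {Ref, Result} ->
            demonitor(MonRef, [flush]),
            {ok, Result};
        {'DOWN', MonRef, process, _Object, Reason} ->
            {error, Reason}
    end.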

Instead we can use what I am going to call 'delayed reply'. There is probably a real name for this, but I have trouble Googling for the right one. In essence, the From in the handle_call callback can be saved for later and the result sent back with gen_server:reply/2. This way the caller blocks as usual while the receiver stays free to deal with intermediate messages from a port or a TCP socket.

% Calling process

Result = receiver:do_something().
% Receiving process.

do_something() ->
    gen_server:call(target_process, do_something).

% Regular sync handler, save the caller.
handle_call(do_something, From, State) ->
    {noreply, State#{from => From}}.

% Handle incoming data.
handle_info(RandomData, #{from := From} = State) ->
    % Results are sent easily back c:
    gen_server:reply(From, RandomData),
    {noreply, State#{from => undefined}}.

Easier to read, easier to use, easier to write.

Update (04/14/2024)

I was asked what I used this for, as the above is pretty abstract. One point was called out: if there are multiple callers the above example is too simplistic and the callers will step on each other, leaving one hanging. Here are the two use-cases to clarify!

Port Communication

The first time I actually used this was quite simple. There was a Port process that would send an unknown amount of data to the caller. The intermediate process would buffer chunks of data until the final amount was reached, flatten the resulting binary, then reply back to the 'main' process for processing. The 'main' process was quite complex and didn't have an actual gen_server definition so it would have been tricky to build in the data accumulation. So this was a bit of a hack to get around the limitations of this 'main' process.

Here is a simple diagram.

Main Proc    Buffer Proc    Port Proc
---------    -----------    ---------
|--- call--->|
|            |-----call---->|
|            |              |work
|            |<----data-----|
|            |<----data-----|
|            |<----data-----|
|            |<----DONE-----|
|<-flat data-|              |
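
A minimal sketch of that buffer process, assuming made-up names, a read_all entry point, and a <<"DONE">> marker as the end-of-stream signal (the real framing depended on the port):

-module(buffer_proc).
-behaviour(gen_server).
-export([start_link/1, read_all/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(PortCmd) ->
    gen_server:start_link(?MODULE, PortCmd, []).

% Blocks the caller until the port has sent everything.
% infinity, because the work can easily outlast the default 5s call timeout.
read_all(Pid) ->
    gen_server:call(Pid, read_all, infinity).

init(PortCmd) ->
    % Open the port here so this process owns it and receives its messages.
    Port = open_port({spawn, PortCmd}, [binary]),
    {ok, #{port => Port, from => undefined, acc => []}}.

% Save the caller; the data will arrive later as port messages.
handle_call(read_all, From, State) ->
    {noreply, State#{from => From, acc => []}}.

handle_cast(_Msg, State) ->
    {noreply, State}.

% Accumulate chunks until the (made-up) DONE marker, then flatten and
% reply to the saved caller.
handle_info({Port, {data, <<"DONE">>}}, #{port := Port, from := From, acc := Acc} = State) ->
    gen_server:reply(From, iolist_to_binary(lists:reverse(Acc))),
    {noreply, State#{from => undefined, acc => []}};
handle_info({Port, {data, Chunk}}, #{port := Port, acc := Acc} = State) ->
    {noreply, State#{acc => [Chunk | Acc]}}.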

Multiple Readers

The second use-case was for reading data from a socket with a queue of workers. Each worker would issue a call to a main process which maintained a queue of waiters. Once a full data chunk was received from the network it would reply to the first caller. When a caller finished processing it would call the main process again to requeue itself. This turned out to be an elegant way to deal with waiters, but it had the downside of not being able to hibernate each process as they waited. But again, circumstances, so it worked well enough.
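
A rough sketch of that shape, with the module name, registration, and framing all hypothetical (assume one complete chunk arrives per socket message):

-module(reader_pool).
-behaviour(gen_server).
-export([start_link/1, next_chunk/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(Socket) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, Socket, []).

% Workers block here until a chunk is handed to them, then call again
% to requeue themselves. infinity: the next chunk may take a while.
next_chunk() ->
    gen_server:call(?MODULE, next_chunk, infinity).

init(Socket) ->
    {ok, #{socket => Socket, waiters => queue:new()}}.

% Save every caller's From in a queue of waiters.
handle_call(next_chunk, From, #{waiters := Waiters} = State) ->
    {noreply, State#{waiters => queue:in(From, Waiters)}}.

handle_cast(_Msg, State) ->
    {noreply, State}.

% One chunk per socket message (framing elided). Reply to the waiter at
% the head of the queue; if nobody is waiting, this sketch just drops it.
handle_info({tcp, Socket, Chunk}, #{socket := Socket, waiters := Waiters} = State) ->
    case queue:out(Waiters) of
        {{value, From}, Rest} ->
            gen_server:reply(From, Chunk),
            {noreply, State#{waiters => Rest}};
        {empty, _} ->
            {noreply, State}
    end.

The queue of saved From tuples is what keeps multiple callers from stepping on each other, which is exactly the gap called out above for the single-caller example.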



2024-02-21