Saturday, December 20, 2014

Some obvious things about asynchronous network I/O

Async I/O and an async API are two different things that are often confused. The first is what we usually want; the second is what we are doomed to deal with all the time. But they are not the same: you can achieve asynchronicity using only a synchronous API, and at the same time you can fail to achieve it using asynchronous calls. This can be illustrated with this boost.asio example:
link to source
It is obvious (to anyone who knows the boost.asio API) that this code uses async I/O: `async_read_some` and `async_write` are asynchronous calls. This is part of the server code. The server first reads some data from the socket asynchronously, then asynchronously writes a response back to it. All input and output in this program is non-blocking, and yet the server is synchronous, because it can't write data to the socket until it has read something from it!

Yes, this is an echo server and it works that way by design, but the same pattern can be found in many "asynchronous" applications. One example is an RPC system. You "call" a method, and your RPC library wraps the arguments in a packet and sends it to the RPC server. The server does some processing and returns an error code in another packet. In this case it doesn't matter whether the API you use is synchronous or asynchronous: the interaction between a single client and the server is synchronous anyway.

The worst part is that the performance of such a system is limited by network latency, not by network bandwidth, because each RPC call costs a full network round trip.

So, what's the conclusion?
  1. Don't be fooled by the `async` buzzword; pay attention to how the systems interact (the protocols), not to the API used to implement that interaction.
  2. Design your protocol so that it can utilize high network bandwidth.
  3. And finally, do crazy things! For example, you can perform RPC calls without waiting for responses, assuming no error occurred; if that assumption turns out to be wrong, roll back the changes made under it. Or, if you know the client will request some data from the server with 99.(9)% probability, send that data without waiting for the request.
