I couldn’t find any way to measure request latency when using a host connection pool. I could measure the time between dispatching a request into the pool and receiving the response from it, but then I end up with the additional time spent on pool operations (such as establishing connections) logged as request latency.
This is the definition of my pool:
val connFlow = Http(actorSystem)
.newHostConnectionPoolHttps[Int](host = "foo.com", settings = settings)
I don’t see any way to get the info from the request, response or pool objects. Any help is much appreciated.
That’s not currently possible, indeed. Even if we measured such a value, it would be of somewhat questionable value, as processing a request can involve these steps:
(maybe) look up the hostname on DNS
(maybe) establish a TCP connection
(maybe) establish a TLS connection
serialize the request
send data to the OS on the client
send data over the network
dynamically adjust transmission speeds between hosts
receive data on the server
deserialize request on the server
actually process the request
serialize response
… (everything in the other direction)
From the client’s point of view, most of those steps are opaque, so measuring complete end-to-end latencies seems more useful.
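One way to measure complete end-to-end latency yourself is to carry the dispatch timestamp through the pool’s user-data type parameter. A minimal sketch (assuming `foo.com` as in the question; the measured time includes pool queueing and connection setup, which is exactly the end-to-end view described above):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.scaladsl.{Sink, Source}
import scala.util.Try

implicit val system: ActorSystem = ActorSystem("latency-example")

// Use the timestamp (nanos at dispatch) as the pool's per-request context.
val pool = Http().newHostConnectionPoolHttps[Long](host = "foo.com")

Source.single(HttpRequest(uri = "/") -> System.nanoTime())
  .via(pool)
  .map { case (result: Try[HttpResponse], startNanos: Long) =>
    // Elapsed wall-clock time from dispatch into the pool until the response
    // (or failure) comes back out of it.
    val elapsedMillis = (System.nanoTime() - startNanos) / 1000000
    (result, elapsedMillis)
  }
  .runWith(Sink.foreach { case (result, ms) =>
    println(s"end-to-end latency: $ms ms ($result)")
  })
```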
If you are particularly concerned about connection creation overhead, you can use the akka.http.host-connection-pool.min-connections setting to let the pool pre-connect to the host. Afterwards, the connections should be “somewhat hot”. Even this won’t be perfect, as a TCP connection doesn’t necessarily ramp up its congestion window until enough actual data has moved back and forth (especially on high-latency, high-throughput connections).
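For reference, pre-connecting can be configured in `application.conf` like this (the value 2 is just an arbitrary example):

```hocon
akka.http.host-connection-pool {
  # Keep at least this many connections established at all times,
  # so requests are less likely to pay the connection-setup cost.
  min-connections = 2
}
```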
That doesn’t mean that we won’t add some simple metrics like this to the pool at some point.
While measuring complete end-to-end latencies is useful, metrics, and indeed hooks for applications to trace connection-related events, are quite useful as well. Some other HTTP clients provide such capabilities: