PHP: Implementing a proxy for keepalive connections using Nginx

Roman Ushakov
Apr 22, 2024 · 5 min read

How to decrease network time in PHP applications

The main advantage of the PHP language is its simplicity: developers don’t have to worry about resource utilisation or memory leaks, because PHP destroys all objects and frees all memory after each request.

Unfortunately, you pay for this simplicity with a performance penalty: each time your application receives a new request, it has to initialise everything from scratch, including all of its connections.

For some extensions there are already persistent versions of such connections (php-amqp, PDO, etc.).
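
For example, PDO exposes this through a single connection attribute; a minimal sketch, where the DSN and credentials are placeholders:

<?php

// With ATTR_PERSISTENT the connection survives in the worker process
// and is reused by subsequent requests instead of being reopened.
$pdo = new PDO(
    'mysql:host=db.internal;dbname=app', // placeholder DSN
    'user',
    'secret',
    [PDO::ATTR_PERSISTENT => true]
);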

But what if we want to keep connections to remote APIs alive? Many services don’t close connections after a response is sent, and clients can reuse the open connections to avoid repeated TCP and TLS handshakes (look for the Connection: Keep-Alive header).

However, as I mentioned above, PHP can’t share such connections between requests, and currently there is no way to make cURL handles persistent with standard tools.
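
Within a single request, reusing one cURL handle is enough to get connection reuse; the problem is that the handle, and the connection with it, dies when the request finishes. A minimal sketch:

<?php

// libcurl keeps the TCP/TLS connection open between calls made on the
// same handle, but only for the lifetime of this PHP request.
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

curl_setopt($ch, CURLOPT_URL, 'https://whentheycry.ru/api/post/1');
$first = curl_exec($ch);

curl_setopt($ch, CURLOPT_URL, 'https://whentheycry.ru/api/post/2');
$second = curl_exec($ch); // no new handshake: the connection is reused

curl_close($ch); // the connection is gone, together with the request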

Configuring a sidecar proxy

We can add a sidecar proxy that will be responsible for keeping connections alive.

The most common way to run a PHP application in Kubernetes is a PHP-FPM + Nginx pod, so we already have an Nginx instance that we can use for this purpose.

Nginx provides configuration for keepalive connections within the upstream block. This means that every remote service we want to call must be known beforehand and added to the configuration.

upstream whentheycry.ru {
    server whentheycry.ru:443;
    # for simplicity, configure according to your needs
    keepalive 1;
}

It’s better to create a separate server block with its own port, different from the one your fastcgi handler listens on. That way our proxy will be available only from within the pod.

server {
    listen *:8083;
    server_name _;

    location ~/(.*?)/(.*) {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $1;
        proxy_set_header Connection "keep-alive";
        proxy_http_version 1.1;
        proxy_pass https://$1/$2;
    }
}

Finally, calls to the API from your code will now look like this: http://127.0.0.1:8083/whentheycry.ru/api/post/2
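
In PHP, such a call might look as follows (a sketch, using the endpoint from the example above):

<?php

// Instead of calling the remote host directly, we target the local
// Nginx sidecar and encode the real hostname as the first path segment.
$ch = curl_init('http://127.0.0.1:8083/whentheycry.ru/api/post/2');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
curl_close($ch);

// Nginx rewrites the request to https://whentheycry.ru/api/post/2 and
// keeps the upstream connection alive for the next PHP request.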

As you can see, the part of the URI before the first slash determines the upstream.

You may ask: why use hostnames as upstream names?

There is a subtlety in how Nginx treats hostnames in upstream blocks. When proxying a request to one of the specified upstreams, Nginx resolves the upstream’s IP address and makes the request to that address directly. Some remote services behave differently depending on the domain name, which would otherwise not be specified. Naming each upstream after its hostname lets proxy_pass https://$1/$2 select the right upstream from the first path segment, while proxy_set_header Host $1 restores the domain name for the remote service.

For example, the public domain cat-fact.herokuapp.com may resolve to 3.216.88.24. While accessing the domain in a browser opens a page with funny cats, requesting the IP address directly just returns an error.
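
You can reproduce the distinction with cURL by pinning the domain to a fixed address yourself; a sketch (the IP is the one from the example above and may have changed since):

<?php

// Requesting the bare IP fails because the server relies on the Host
// header (and TLS SNI) to select the right virtual host. CURLOPT_RESOLVE
// pins the IP while keeping the hostname, so both are still sent.
$ch = curl_init('https://cat-fact.herokuapp.com/facts');
curl_setopt($ch, CURLOPT_RESOLVE, ['cat-fact.herokuapp.com:443:3.216.88.24']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch); // works: IP connection + correct domain name
curl_close($ch);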

The full configuration looks as follows:

upstream whentheycry.ru {
    server whentheycry.ru:443;
    # for simplicity, configure according to your needs
    # https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
    keepalive 1;
}

upstream animechan.xyz {
    server animechan.xyz:443;
    # for simplicity, configure according to your needs
    # https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
    keepalive 1;
}

log_format proxy '[$time_local] "$request" $upstream_addr|$upstream_http_connection|$upstream_connect_time|$upstream_header_time|$upstream_response_time';

server {
    listen *:8083;
    server_name _;

    gzip on;
    gzip_comp_level 6;
    gzip_types text/plain application/json;

    access_log /var/log/nginx/connections.log proxy;

    location ~/(.*?)/(.*) {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $1;
        proxy_set_header Connection "keep-alive";
        proxy_http_version 1.1;
        proxy_pass https://$1/$2;
    }
}

Benchmarking

The actual performance boost depends on how far away your remote API is. For example, here are the metrics for calling a service located in the Netherlands from Tokyo.

Running https://whentheycry.ru/api/post/2
Executed request #0: tcp = 0.431877, tls = 0.755606, total = 1.429839
Executed request #1: tcp = 0.434394, tls = 0.803453, total = 1.506498
Executed request #2: tcp = 0.410752, tls = 0.741367, total = 1.34097
Executed request #3: tcp = 0.323339, tls = 0.628681, total = 1.276446
Executed request #4: tcp = 0.676683, tls = 1.098127, total = 1.968585
Executed request #5: tcp = 0.33116, tls = 0.665379, total = 1.286649
Executed request #6: tcp = 0.398673, tls = 0.715713, total = 1.360891
Executed request #7: tcp = 0.368654, tls = 0.789558, total = 1.415814
Executed request #8: tcp = 0.478557, tls = 0.794639, total = 1.451479
Executed request #9: tcp = 0.359838, tls = 0.763709, total = 1.376036
Finished all requests in 14.437201023102 seconds
========================
Running http://127.0.0.1:8083/whentheycry.ru/api/post/2
Executed request #0: tcp = 0.001341, tls = 0, total = 1.337778
Executed request #1: tcp = 0.001613, tls = 0, total = 0.472488
Executed request #2: tcp = 0.000648, tls = 0, total = 0.336202
Executed request #3: tcp = 0.001323, tls = 0, total = 0.321681
Executed request #4: tcp = 0.000472, tls = 0, total = 0.318372
Executed request #5: tcp = 0.001244, tls = 0, total = 0.312835
Executed request #6: tcp = 0.000435, tls = 0, total = 0.336835
Executed request #7: tcp = 0.002279, tls = 0, total = 0.351638
Executed request #8: tcp = 0.001315, tls = 0, total = 0.314239
Executed request #9: tcp = 0.001086, tls = 0, total = 0.32175
Finished all requests in 4.4288210868835 seconds
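
The tcp/tls/total columns map naturally onto cURL’s built-in timing getters; here is a minimal sketch of such a benchmark (not necessarily the exact script used):

<?php

$url = 'http://127.0.0.1:8083/whentheycry.ru/api/post/2';

for ($i = 0; $i < 10; $i++) {
    // A fresh handle per iteration mimics PHP's request lifecycle:
    // without the proxy, every iteration pays for new handshakes.
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);

    printf(
        "Executed request #%d: tcp = %s, tls = %s, total = %s\n",
        $i,
        curl_getinfo($ch, CURLINFO_CONNECT_TIME),    // TCP handshake
        curl_getinfo($ch, CURLINFO_APPCONNECT_TIME), // TCP + TLS handshake
        curl_getinfo($ch, CURLINFO_TOTAL_TIME)
    );
    curl_close($ch);
}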

It takes 600–800 ms just to establish an HTTPS connection (TCP plus TLS handshake). By utilising keepalive connections, we have made our requests up to 1 second faster.

Alternatively, here is a case where both applications are located within the same datacenter: under a load test with five virtual users for 15 minutes, the application makes one HTTP request to another microservice.

Before:

avg=966.03ms min=706.52ms med=932.84ms max=2.23s p(90)=1.14s p(95)=1.22s

After:

avg=880.47ms min=659.8ms med=840.25ms max=2.16s p(90)=1.06s p(95)=1.12s

That saves around 100 ms, which is about 8% of the total execution time. The gain can be greater if the system consists of multiple microservices and multiple HTTP calls are made within a single business transaction.

You can find the full prototype on GitHub.
