Many reverse proxies written in Rust have been announced recently (as of the time of writing); we’re even building one of them.
This post will show you how some of the reverse proxies compare to each other, discussing capabilities, ease of use, and performance.
Note: as this comparison is performed by authors of one of the reverse proxies listed, the comparison might be biased.
The contenders
We have specifically chosen these contenders for the comparison:
- Ferron (we’re building this one! 😄; https://github.com/ferronweb/ferron)
- Aralez (https://github.com/sadoyan/aralez)
- Sōzu (https://github.com/sozu-proxy/sozu)
- rpxy (https://github.com/junkurihara/rust-rpxy)
There’s also a list of programs written in Rust that can be used as HTTP reverse proxy servers at https://areweproxyyet.github.io/, but for this post, we’re focusing on a select few proxies.
Ferron
From its README:
Ferron - a fast, modern, and easily configurable web server with automatic TLS
Since its README lists benefits rather than features, let us summarize the feature set of the web server we’re building.
Ferron is a general-purpose web server that supports reverse proxying. It supports automatic TLS certificate management, load balancing, and even experimental HTTP/3.
Feature breakdown:
- Reverse proxy basics
- Simple config (proxy directive)
- Routing
- Path-based routing with location blocks
- Static + proxy hybrid
- Serve static files + proxy in same config
- Protocols
- WebSockets (native)
- gRPC via HTTP/2
- Load balancing
- Multiple backends + passive health checks
- Caching
- In-memory caching
- Advanced proxy behavior
- Header rewriting
- URL sanitation / disabling
- Unix socket backends
- Modules
- Reverse proxy, forward proxy, FastCGI/SCGI, auth forwarding, rate limiting
Aralez
From its README:
Aralez is a high-performance Rust reverse proxy with zero-configuration automatic protocol handling, TLS, and upstream management, featuring Consul and Kubernetes integration for dynamic pod discovery and health-checked routing, acting as a lightweight ingress-style proxy.
Aralez also supports dynamically reloading configuration, authentication (basic auth, API key, or JWT), load balancing, and even a built-in file server. It is built on Pingora, the proxy-building library that powers Cloudflare’s infrastructure.
From the ecosystem and available mentions:
- Built on Cloudflare’s Pingora (a high-performance proxy framework)
- Focus on:
- High-performance async proxying
- Modern Rust stack
- Likely features (based on the Pingora ecosystem):
- HTTP proxying
- Extensibility (middleware-style)
- TLS support
Sōzu
From its README:
Sōzu is a lightweight, fast, always-up reverse proxy server.
Sōzu (with a bar above the “o”… 😅) supports dynamically reloading its configuration, upgrading without restarting, and TLS.
Feature breakdown:
- Protocols
- TCP + HTTP reverse proxy
- Dynamic config
- Runtime updates via Unix socket (no reloads)
- Atomic diff-based config updates
- Zero-downtime
- Upgrades without dropping connections
- Load balancing
- RR, random, least-loaded, power-of-two
- Health-aware backend selection
- Architecture
- Multi-worker, single-threaded event loops (epoll)
- “Share-nothing” model (no locking)
- Security / TLS
- TLS termination via rustls
- Observability
- Metrics + logging via external systems (statsd, sockets)
rpxy
From its README:
rpxy: A simple and ultrafast reverse-proxy serving multiple domain names with TLS termination, written in Rust
This project is also, according to the README, a “work-in-progress project that is being evolved”.
From its README (again…):
The supported features are summarized as follows:
- Supported HTTP(S) protocols: HTTP/1.1, HTTP/2, and the brand-new HTTP/3
- gRPC is also supported
- Serving multiple domain names with TLS termination
- Mutual TLS authentication with client certificates
- Automated certificate issuance and renewal via TLS-ALPN-01 ACME protocol
- Post-quantum key exchange for TLS/QUIC
- TLS connection sanitization to avoid domain fronting
- Load balancing with round-robin, random, and sticky sessions
- HAProxy PROXY Protocol v1/v2 inbound support for recovering original client IP behind L4 proxies (e.g., rpxy-l4)
- and more…
Feature breakdown:
- Protocols
- HTTP/1.1, HTTP/2, HTTP/3
- gRPC support
- TLS
- TLS termination + multi-domain hosting
- ACME (auto cert issuance/renewal)
- Mutual TLS (client cert auth)
- Post-quantum TLS (ML-KEM hybrid)
- Routing
- Host-based + path-based routing
- Performance
- Designed for high concurrency, low resource usage
- Other features
- Caching, load balancing
- Config
- Simple TOML configuration
The community
The community may be smaller for these reverse proxies than for, say, NGINX or Caddy, but let’s look at star counts as of the time of writing:
- ferronweb/ferron - 1.9K+ GitHub stars
- sadoyan/aralez - 600+ GitHub stars
- sozu-proxy/sozu - 3.6K+ GitHub stars
- junkurihara/rust-rpxy - 600+ GitHub stars
This shows that at least some of these reverse proxies, in particular Sōzu and Ferron, have a community that administrators can rely on.
The ease of use
Yes, ease of use matters too for reverse proxies, as some system administrators need a reverse proxy that just works.
Each reverse proxy takes a (slightly) different approach to installation and configuration.
Ferron
Ferron has a “getting started” guide aimed at beginners, describing the basics of web servers and a first web server configuration.
Ferron’s documentation mentions multiple installation methods, including an installer for Linux, Docker, packages, and even manual installation. Ferron also provides pre-built binaries (besides the installer and packages mentioned earlier), making it very easy to install without having to set up the Rust toolchain.
Ferron uses KDL-format configurations, for example:
// Example configuration with reverse proxy. Replace "example.com" with your domain name.
example.com {
proxy "http://localhost:3000/" // Replace "http://localhost:3000" with the backend server URL
}

You may notice that Ferron’s configuration resembles a Caddyfile. Both Ferron and Caddy are known for their simple, less verbose configurations (at least for common use cases), making them easier to configure.
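As a hypothetical illustration of the path-based routing mentioned in the feature breakdown above, a location block might be combined with proxying like this. This is a sketch based only on the directive names mentioned in this post (location, proxy); verify the exact syntax against Ferron’s documentation before use:

```kdl
// Hypothetical sketch: proxy only requests under /api to a backend.
// Directive names taken from the feature list above; check Ferron's
// documentation for the authoritative syntax.
example.com {
    location "/api" {
        proxy "http://localhost:3000/"
    }
}
```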
Ferron doesn’t expose worker-related configuration, though it provides sensible defaults. For example, from the documentation:
The WebSocket protocol is supported out of the box in this configuration example - no additional configuration is required.
When trying to run Ferron with an invalid configuration (in this case, a tls directive with one value), it returns an error message like this:
Error while running a server: The `tls` configuration property must have exactly two values (at "localhost" host block)

Aralez
Aralez has configuration-related documentation explaining that it uses two YAML-format configuration files: main.yaml (startup parameters) and upstreams.yaml (upstream mappings). The separation of configuration files might resemble Traefik (which also uses separate static and dynamic configurations), so it might be a bit easier to get used to Aralez configurations if you’ve used Traefik before.
Aralez also has pre-built binaries released on GitHub (for example, v0.85.1) for 64-bit x86 and ARM64 architectures on Linux, both GNU libc (dynamically linked) and musl libc (statically linked). This makes it easier to get started, without having to compile it yourself or install the Rust toolchain.
In addition, Aralez has a quick start guide in its documentation, guiding you through the initial setup. However, aside from Docker, only manual installation is mentioned in the documentation.
Also, after following the quick start guide, when trying to start Aralez, we got this Rust panic (due to a server misconfiguration):
thread 'main' (12412) panicked at src/web/start.rs:21:40:
called `Option::unwrap()` on a `None` value
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

In our opinion, Aralez should provide clear configuration error messages instead of Rust panics for misconfigurations.
Aralez also has several configuration options, such as threads (which defaults to 1; we think it should default to the available parallelism to take advantage of multiple cores).
Sōzu
Sōzu has a “getting started” guide describing the installation process, which basically amounts to installing the Rust toolchain and compiling Sōzu (either with cargo install or from source).
Sōzu uses TOML-format configuration files (its parameters are documented on a configuration-related documentation page). It also has a default configuration file in its GitHub repository. Some of the defaults (such as 2 workers) could be improved too, for example to the available parallelism, to utilize multiple CPU cores…
After compiling it, when an invalid configuration is supplied to the web server, the error message looks like this:
failed to start Sōzu: failed to load config: Could not read file ../lib/assets/key.pem: No such file or directory (os error 2)
Error: StartMain(LoadConfig(FileRead { path_to_read: "../lib/assets/key.pem", io_error: Os { code: 2, kind: NotFound, message: "No such file or directory" } }))

This error message specifies that the ../lib/assets/key.pem file doesn’t exist. At least it’s more user-friendly, in our opinion, than a Rust panic (as is the case with Aralez)…
The configuration files specify “clusters” with a protocol, frontends, and backends. The frontend/backend separation is similar to HAProxy’s, except that in HAProxy, frontends and backends have separate names, while in Sōzu, they live in one “cluster”.
rpxy
rpxy also has a “getting started” guide that mentions installation methods such as building from source, packages, and pre-built binaries. In particular, the packages allow rpxy to be installed very easily, without having to set up a Rust toolchain.
rpxy uses TOML-format configurations (and includes an example configuration). The configuration specifies listeners and backend apps. Like Ferron, its configuration doesn’t expose worker-related settings…
When trying to pass an empty TOML-format configuration, this appears in the output:
2026-03-23T18:15:06.414755Z INFO Start rpxy service with dynamic config reloader
2026-03-23T18:15:06.414898Z ERROR rpxy service exited: Invalid configuration: Either/Both of http_port or https_port must be specified

The error message clearly states that an HTTP or HTTPS port must be specified.
The performance
Now let’s get into the performance, where things can get interesting…
The conditions
The performance benchmarks were performed on a PC with AMD Ryzen 7 8700G CPU and 32 GB of RAM, running Kubuntu 24.04.
File descriptor limits were increased using this command:
ulimit -n $(ulimit -n -H)

The benchmark was performed using this script for HTTP/2:
#!/bin/bash
webserver='Ferron'
url='https://localhost:8443/'
# CSV first line
echo -n 'Web server,'
for concurrency in 1 $(seq 100 100 10000); do
echo -n $concurrency
if [ $concurrency -eq 10000 ]; then
echo
else
echo -n ','
fi
done
# CSV second line
echo -n "$webserver,"
for concurrency in 1 $(seq 100 100 10000); do
threads=$(nproc);
if [ $threads -gt $concurrency ]; then
threads=$concurrency
fi
tempresults=$(mktemp)
h2load -n $(($concurrency * 100)) -c $concurrency -t $threads $url > $tempresults
# The latency is mean latency.
echo -n "$(cat $tempresults | grep finished | cut -d',' -f2 | cut -d' ' -f2) [$(cat $tempresults | grep 'time for request' | awk '{ print $6 }')]"
if [ $concurrency -eq 10000 ]; then
echo
else
echo -n ','
fi
done

And for HTTP/1.x:
#!/bin/bash
webserver='Ferron'
url='https://localhost:8443/'
# CSV first line
echo -n 'Web server,'
for concurrency in 1 $(seq 100 100 10000); do
echo -n $concurrency
if [ $concurrency -eq 10000 ]; then
echo
else
echo -n ','
fi
done
# CSV second line
echo -n "$webserver,"
for concurrency in 1 $(seq 100 100 10000); do
threads=$(nproc);
if [ $threads -gt $concurrency ]; then
threads=$concurrency
fi
tempresults=$(mktemp)
h2load --h1 -n $(($concurrency * 100)) -c $concurrency -t $threads $url > $tempresults
# The latency is mean latency.
echo -n "$(cat $tempresults | grep finished | cut -d',' -f2 | cut -d' ' -f2) [$(cat $tempresults | grep 'time for request' | awk '{ print $6 }')]"
if [ $concurrency -eq 10000 ]; then
echo
else
echo -n ','
fi
done

Both benchmarking wrapper scripts output CSV data.
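The per-run numbers come from grepping h2load’s summary lines, as the wrapper scripts above do. Here is a minimal, self-contained illustration of that extraction; the sample text only mimics the shape of h2load’s output, and the numbers are illustrative, not real measurements:

```shell
# Reproduce the extraction pipeline used in the wrapper scripts.
# "sample" mimics the shape of h2load's summary output.
sample='finished in 10.00s, 12345.67 req/s, 1.50MB/s
time for request:      100us      900ms     12.34ms      5.00ms    99.00%'

# Requests per second: second comma-separated field of the "finished" line.
rps=$(printf '%s\n' "$sample" | grep finished | cut -d',' -f2 | cut -d' ' -f2)

# Mean latency: sixth whitespace-separated field of the "time for request" line.
latency=$(printf '%s\n' "$sample" | grep 'time for request' | awk '{ print $6 }')

echo "$rps [$latency]"
# prints: 12345.67 [12.34ms]
```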
The backend server used was a simple “Hello World” web application built with Axum, with the code below:
use axum::{
routing::get,
Router,
};
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
#[tokio::main]
async fn main() {
// build our application with a single route
let app = Router::new().route("/", get(|| async { "Hello, World!" }));
// run our app with hyper, listening globally on port 3000
let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
axum::serve(listener, app).await.unwrap();
}

The web server versions were as follows:
- Ferron 2.6.0
- Aralez 0.85.1
- Sōzu 1.1.1
- rpxy 0.11.3
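For completeness, building the Axum backend shown earlier requires a Cargo manifest along these lines. This is a sketch: the crate versions shown are illustrative, not necessarily the ones used in the benchmark:

```toml
# Illustrative dependency set for the "Hello World" backend;
# pin versions as appropriate for your toolchain.
[dependencies]
axum = "0.8"
tokio = { version = "1", features = ["full"] }
mimalloc = "0.1"
```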
Web server configurations
Ferron:
globals {
default_http_port 8080
default_https_port 8443
}
localhost {
tls "/etc/certs/server.crt" "/etc/certs/server.key"
proxy "http://localhost:3000"
}

Aralez (main.yaml):
# Main configuration file, applied on startup
threads: 16 # Number of daemon threads default setting
#runuser: pastor # Username for running aralez after dropping root privileges, requires program to start as root
#rungroup: pastor # Group for running aralez after dropping root privileges, requires program to start as root
daemon: false # Run in background
upstream_keepalive_pool_size: 500 # Pool size for upstream keepalive connections
pid_file: /tmp/aralez.pid # Path to PID file
error_log: /tmp/aralez_err.log # Path to error log
upgrade_sock: /tmp/aralez.sock # Path to socket file
config_api_enabled: false # Boolean to enable/disable remote config push capability.
config_address: 0.0.0.0:8000 # HTTP API address for pushing upstreams.yaml from remote location
proxy_address_http: 0.0.0.0:8080 # Proxy HTTP bind address
proxy_address_tls: 0.0.0.0:8443 # Optional, Proxy TLS bind address
proxy_certificates: /etc/certs # Mandatory if proxy_address_tls set, should contain a certificate and key files strictly in a format {NAME}.crt, {NAME}.key.
proxy_tls_grade: a+ # Grade of TLS suite for proxy (a+, a, b, c, unsafe), matching grades of Qualys SSL Labs
upstreams_conf: upstreams.yaml # the location of upstreams file
file_server_folder: /opt/storage # Optional, local folder to serve
file_server_address: 127.0.0.1:3002 # Optional, Local address for file server. Can set as upstream for public access.
log_level: info # info, warn, error, debug, trace, off
hc_method: HEAD # Healthcheck method (HEAD, GET, POST are supported) UPPERCASE
hc_interval: 2 #Interval for health checks in seconds
master_key: 910517d9-f9a1-48de-8826-dbadacbd84af-cb6f830e-ab16-47ec-9d8f-0090de732774 # Master key for working with API server and JWT secret

Aralez (upstreams.yaml):
# The file under watch and hot reload, changes are applied immediately, no need to restart or reload.
provider: "file" # "file" "consul" "kubernetes"
sticky_sessions: false
to_https: false
server_headers:
- "X-Forwarded-Proto:https"
- "X-Forwarded-Port:443"
upstreams:
localhost:
paths:
"/":
to_https: false
# client_headers:
# - "X-Proxy-From:Aralez"
healthcheck: false
servers:
- "127.0.0.1:3000"

Sōzu:
log_level = "error"
log_target = "stdout"
command_socket = "./sozu.sock"
command_buffer_size = 16384
max_command_buffer_size = 163840
worker_count = 16
worker_automatic_restart = true
worker_timeout = 10
handle_process_affinity = false
max_connections = 500
buffer_size = 16393
activate_listeners = true
[[listeners]]
protocol = "http"
address = "0.0.0.0:8080"
[[listeners]]
protocol = "https"
address = "0.0.0.0:8443"
tls_versions = ["TLS_V12", "TLS_V13"]
cipher_list = [
# TLS 1.3 cipher suites
"TLS13_AES_256_GCM_SHA384",
"TLS13_AES_128_GCM_SHA256",
"TLS13_CHACHA20_POLY1305_SHA256",
# TLS 1.2 cipher suites
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
]
[clusters]
[clusters.MyCluster]
protocol = "http"
load_balancing = "ROUND_ROBIN"
frontends = [
{ address = "0.0.0.0:8080", hostname = "localhost", tags = { key = "value" }, path = "/api" },
{ address = "0.0.0.0:8443", hostname = "localhost", tags = { key = "value" }, certificate = "/etc/certs/server.crt", key = "/etc/certs/server.key" },
]
backends = [
{ address = "127.0.0.1:3000", backend_id = "the-backend-to-my-app" }
]

rpxy:
listen_port = 8080
listen_port_tls = 8443
[apps."app_name"]
server_name = 'localhost'
tls = { tls_cert_path = '/etc/certs/server.crt', tls_cert_key_path = '/etc/certs/server.key' }
reverse_proxy = [{ upstream = [{ location = 'localhost:3000' }] }]

Note: rpxy was running with its log files symlinked to /dev/null (specifically, rpxy.log and access.log).
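The log symlinking mentioned in the note can be reproduced like this (run in rpxy’s working directory; the file names come from the note above):

```shell
# Point rpxy's log files at /dev/null so disk logging does not
# skew the benchmark results.
ln -sf /dev/null rpxy.log
ln -sf /dev/null access.log
```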
The results
Reverse proxy throughput (HTTP/1.x)
Higher is better | Benchmarks run on AMD Ryzen 7 8700G, 32GB RAM, with the h2load --h1 -n $(($CONCURRENCY * 100)) -c $CONCURRENCY -t 16 https://localhost command | Linux kernel version 6.14.0-27-generic
Reverse proxy latency (HTTP/1.x)
Lower is better | Benchmarks run on AMD Ryzen 7 8700G, 32GB RAM, with the h2load --h1 -n $(($CONCURRENCY * 100)) -c $CONCURRENCY -t 16 https://localhost command | Linux kernel version 6.14.0-27-generic
The benchmarks for HTTP/1.x show a clear separation in how the proxies perform under load. Sōzu stands out as the strongest overall performer, delivering the highest throughput by a wide margin and maintaining that performance even as concurrency increases. Its latency remains moderate and stable, which indicates it handles heavy traffic efficiently without significant slowdown.
Ferron and Aralez fall into a middle category. They provide decent and fairly consistent throughput, but at roughly half the level of Sōzu. As concurrency rises, their latency increases steadily, suggesting they might scale less effectively under heavier load.
rpxy behaves quite differently from the others. While it achieves the lowest latency, this comes at the cost of throughput, which drops sharply as concurrency increases. This indicates it cannot sustain high request volumes, and its low latency is largely a result of doing less work overall (maybe it’s caused by misconfiguration on our side?).
Reverse proxy throughput (HTTP/2)
Higher is better | Benchmarks run on AMD Ryzen 7 8700G, 32GB RAM, with the h2load -n $(($CONCURRENCY * 100)) -c $CONCURRENCY -t 16 https://localhost command | Linux kernel version 6.14.0-27-generic
Note: Sōzu does not seem to support HTTP/2
Reverse proxy latency (HTTP/2)
Lower is better | Benchmarks run on AMD Ryzen 7 8700G, 32GB RAM, with the h2load -n $(($CONCURRENCY * 100)) -c $CONCURRENCY -t 16 https://localhost command | Linux kernel version 6.14.0-27-generic
Note: Sōzu does not seem to support HTTP/2
For HTTP/2, the results show a more competitive field, but the overall patterns remain similar.
In terms of throughput, Aralez and Ferron perform at roughly the same level, both sustaining around 100k–120k requests per second across most concurrency levels. Aralez has a slight edge at lower concurrency, but the two converge as load increases, indicating broadly comparable scalability. In contrast, rpxy again degrades significantly under load, with throughput steadily dropping to around 20k–40k requests per second at high concurrency, making it the weakest performer in sustained workloads. Notably, Sōzu is absent here, as it does not appear to support HTTP/2.
Looking at latency, rpxy once again reports the lowest response times, remaining in the single-digit to low teens millisecond range even at high concurrency. However, as with the HTTP/1.x results, this low latency corresponds to its much lower throughput. Ferron and Aralez both show steadily increasing latency as concurrency rises, with Aralez exhibiting slightly higher latency than Ferron at scale.
Conclusion
Each of the reverse proxies covered in this comparison brings something different to the table, and the right choice ultimately depends on your specific use case.
Sōzu stands out as the clear performance leader for HTTP/1.x workloads, delivering the highest throughput by a significant margin. However, its lack of HTTP/2 support may be a dealbreaker for modern deployments.
Ferron and Aralez offer a solid middle ground — competitive HTTP/2 performance, good ease of use, and a broader feature set including automatic TLS and load balancing. Ferron edges ahead slightly with its beginner-friendly documentation and cleaner error messages, while Aralez benefits from its Pingora-powered foundation and Consul/Kubernetes integration for dynamic environments.
rpxy, while showing the lowest latency figures, struggles to sustain throughput under higher concurrency. Maybe it’s caused by misconfiguration of the reverse proxy on our end?
In summary, if raw HTTP/1.x performance is your priority, Sōzu is hard to beat. For a more feature-complete and modern reverse proxy with strong HTTP/2 support, Ferron and Aralez are both worthy contenders. As the Rust ecosystem continues to mature, it will be exciting to see how these projects evolve and close the gap with more established solutions like NGINX and Caddy.