In this article, we are going to build a reverse proxy using the Rama framework. More specifically, a TLS termination proxy, based on this example.
I’m writing this article for learning purposes; it is not meant to be a production-ready proxy. I’m still a learner in this matter, trying to learn about the framework and the topic at the same time.
If this material seems vague or insufficient, I suggest you visit the Rama book and the Rama documentation.
One more disclaimer: the code below is a copy/paste of the example in the Rama repository, and there are a lot of quotes from the documentation.
Rama
As the documentation page says, Rama is a modular service framework for the Rust language to move and transform your network packets.
Rama is async-first, using Tokio as its only async runtime.
Reverse Proxy
According to the article “How To Set Up a Reverse Proxy (Step-By-Steps for Nginx and Apache)” written by Salman Ravoof, a reverse proxy is a server that sits in front of a web server and receives all the requests before they reach the origin server. It works similarly to a forward proxy, except in this case it’s the web server using the proxy rather than the user or client. Reverse proxies are typically used to enhance the performance, security, and reliability of the web server.
TLS Termination Proxy
This is what Dan Goodin says in his article “TLS Termination Proxy Explained: What Is It And How Does It Work?”, posted on ProxyBros.
TLS (Transport Layer Security) is an encrypted protocol for secure data transactions. In 1999, the IETF (Internet Engineering Task Force) organization released the first version of TLS as an upgrade to the SSL 3.0 standard.
HTTP is the main data transaction protocol on the Internet. Initially unsecured, it became safe to use with SSL/TLS encryption.
TLS works as an encryption security add-on to HTTP. It sets the connection rules and calls cryptographic systems to secure the connection. The upgraded HTTP version is widely known as HTTPS.
A TLS termination proxy is an active participant in the connection. It routes HTTPS traffic from client to server and interrupts the connection to read and analyze the traffic. To do this work, an intermediary breaks the tunnel between a client and a server into two parts: Client–Proxy and Proxy–Server. The TLS connection stops on one side.
Requirements
Rust installed (MSRV is 1.80)
WSL2 installed (Windows users)
Building the Proxy
First, we create our project by running the following command:
cargo new tls_termination_example
Cargo.toml
[dependencies]
rama = { git = "https://github.com/plabayo/rama", branch = "main", features = ["full"] }
tokio = { version = "1.42", features = ["full"] }
tracing = "0.1.41"
tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
use rama::{
    graceful::Shutdown,
    http::{server::HttpServer, Request, Response},
    layer::{ConsumeErrLayer, GetExtensionLayer},
    net::{
        forwarded::Forwarded,
        stream::SocketInfo,
        tls::server::{SelfSignedData, ServerAuth, ServerConfig},
    },
    proxy::haproxy::{
        client::HaProxyLayer as HaProxyClientLayer, server::HaProxyLayer as HaProxyServerLayer,
    },
    rt::Executor,
    service::service_fn,
    tcp::{
        client::service::{Forwarder, TcpConnector},
        server::TcpListener,
    },
    tls::{
        boring::server::{TlsAcceptorData, TlsAcceptorLayer},
        types::SecureTransport,
    },
    Context, Layer,
};
use std::{convert::Infallible, time::Duration};
use tracing::metadata::LevelFilter;
use tracing_subscriber::{fmt, prelude::*, EnvFilter};
In the code above, we import all the items we will use for this project.
async fn http_service<S>(ctx: Context<S>, _request: Request) -> Result<Response, Infallible> {
    let client_addr = ctx
        .get::<Forwarded>()
        .unwrap()
        .client_socket_addr()
        .unwrap();
    let proxy_addr = ctx.get::<SocketInfo>().unwrap().peer_addr();
    Ok(Response::new(
        format!(
            "hello client {client_addr}, you were served by tls terminator proxy {proxy_addr}\r\n"
        )
        .into(),
    ))
}
In the code above, we define the http_service function, which handles the HTTP connection. This function retrieves the client’s address and the proxy’s address, then sends a response with the message "hello client {client_addr}, you were served by tls terminator proxy {proxy_addr}\r\n".
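Since both lookups can in principle fail, here is a minimal sketch (my own hypothetical variant, not part of the original example, assuming client_socket_addr returns an Option as the unwrap above suggests) of a more defensive version of the same handler:

async fn http_service_defensive<S>(
    ctx: Context<S>,
    _request: Request,
) -> Result<Response, Infallible> {
    // Fall back to a placeholder instead of panicking when the Forwarded or
    // SocketInfo extensions are missing (hypothetical variant of the handler).
    let client_addr = ctx
        .get::<Forwarded>()
        .and_then(|forwarded| forwarded.client_socket_addr())
        .map(|addr| addr.to_string())
        .unwrap_or_else(|| "unknown".to_owned());
    let proxy_addr = ctx
        .get::<SocketInfo>()
        .map(|info| info.peer_addr().to_string())
        .unwrap_or_else(|| "unknown".to_owned());
    Ok(Response::new(
        format!(
            "hello client {client_addr}, you were served by tls terminator proxy {proxy_addr}\r\n"
        )
        .into(),
    ))
}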
#[tokio::main]
async fn main() {
    tracing_subscriber::registry()
        .with(fmt::layer())
        .with(
            EnvFilter::builder()
                .with_default_directive(LevelFilter::DEBUG.into())
                .from_env_lossy(),
        )
        .init();

    let tls_server_config = ServerConfig::new(ServerAuth::SelfSigned(SelfSignedData::default()));
    let acceptor_data = TlsAcceptorData::try_from(tls_server_config).expect("create acceptor data");

    let shutdown = Shutdown::default();

    ...
}
In the code above, we initialize tracing_subscriber for logging with DEBUG as the LevelFilter. We configure the TLS server with ServerConfig, which is a common API to configure a TLS server.
ServerAuth is the following enum.
pub enum ServerAuth {
    SelfSigned(SelfSignedData),
    Single(ServerAuthData),
    CertIssuer(ServerCertIssuerData),
}
SelfSigned(SelfSignedData) requests the TLS implementation to generate self-signed single data. TlsAcceptorData is the internal data used as configuration/input for the TlsAcceptorService, a Service which accepts TLS connections and delegates the underlying transport stream to the given service.
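As a small aside, a minimal sketch (a hypothetical variant using only the types already shown in this example) of that configuration step with explicit error handling instead of expect could look like this:

// Hypothetical variant of the configuration step: build the acceptor data
// from a self-signed server config and handle the error explicitly.
let tls_server_config = ServerConfig::new(ServerAuth::SelfSigned(SelfSignedData::default()));
let acceptor_data = match TlsAcceptorData::try_from(tls_server_config) {
    Ok(data) => data,
    Err(err) => {
        tracing::error!(?err, "failed to create TLS acceptor data");
        return;
    }
};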
...
shutdown.spawn_task_fn(|guard| async move {
    let tcp_service = (
        TlsAcceptorLayer::new(acceptor_data).with_store_client_hello(true),
        GetExtensionLayer::new(|st: SecureTransport| async move {
            let client_hello = st.client_hello().unwrap();
            tracing::debug!(?client_hello, "secure connection established");
        }),
    )
        .layer(Forwarder::new(([127, 0, 0, 1], 62801)).connector(
            HaProxyClientLayer::tcp().layer(TcpConnector::new()),
        ));

    TcpListener::bind("127.0.0.1:63801")
        .await
        .expect("bind TCP Listener: tls")
        .serve_graceful(guard, tcp_service)
        .await;
});
...
The code above configures middleware layers for handling TLS connections and logging client information, forwards incoming connections to another service while preserving client information through the HAProxy protocol, and binds to a specified address and port to listen for incoming connections.
What the documentation says about the HaProxyClientLayer is that it encodes and writes the HAProxy protocol, as a client, on the connected stream. The tcp function creates a new HaProxyLayer for the TCP protocol.
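To make the role of that layer concrete, here is a minimal sketch (a hypothetical rearrangement of the calls already used in the example, not code from the original) contrasting the forwarder with and without the HAProxy client layer:

// The forwarder as used in the example: the HaProxyClientLayer wraps the TCP
// connector so the original client address is announced to the upstream
// through the PROXY protocol header...
let forwarder_with_haproxy = Forwarder::new(([127, 0, 0, 1], 62801))
    .connector(HaProxyClientLayer::tcp().layer(TcpConnector::new()));

// ...while a plain forwarder (no connector layer) would only relay bytes,
// and the upstream would only ever see the proxy's own address.
let plain_forwarder = Forwarder::new(([127, 0, 0, 1], 62801));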
As the documentation says, spawn_task_fn returns a Tokio crate::sync::JoinHandle that can be awaited until the spawned task (fn) is completed. The guard passed to the task is a ShutdownGuard, whose primary use is to prevent the Shutdown from shutting down.
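A minimal sketch (using only the Shutdown API already shown in this example) of the relationship between the guard and the shutdown:

// The guard handed to the task keeps the shutdown pending; once the task
// finishes and the guard is dropped, shutdown_with_limit can complete.
let shutdown = Shutdown::default();
shutdown.spawn_task_fn(|guard| async move {
    tracing::info!("doing some work while holding the shutdown guard");
    // dropping the guard releases the Shutdown
    drop(guard);
});
shutdown
    .shutdown_with_limit(Duration::from_secs(5))
    .await
    .expect("graceful shutdown");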
...
shutdown.spawn_task_fn(|guard| async {
    let exec = Executor::graceful(guard.clone());
    let http_service = HttpServer::auto(exec).service(service_fn(http_service));
    let tcp_service =
        (ConsumeErrLayer::default(), HaProxyServerLayer::new()).layer(http_service);

    TcpListener::bind("127.0.0.1:62801")
        .await
        .expect("bind TCP Listener: http")
        .serve_graceful(guard, tcp_service)
        .await;
});

shutdown
    .shutdown_with_limit(Duration::from_secs(30))
    .await
    .expect("graceful shutdown");
The code above creates an HTTP server: it configures an executor that allows graceful task handling, binds a TCP listener, and serves requests using layered middleware.
The documentation says HttpServer is a builder for configuring and listening over HTTP using a Service. Supported protocols: HTTP/1, H2, Auto (HTTP/1 + H2). The service function turns the HttpServer into a Service that can serve IO byte streams (e.g. a TCP stream) as HTTP.
HaProxyServerLayer is a layer to decode the HAProxy protocol, and the layer function wraps the given service with the middleware, returning a new service.
ConsumeErrLayer is a Layer that produces ConsumeErr services, which, as the name suggests, consume errors rather than propagating them.
The shutdown_with_limit function returns a future that completes once the Shutdown has been triggered and all ShutdownGuards have been dropped, or once the given Duration has elapsed. The resolved Duration is the time it took for the Shutdown to wait for all ShutdownGuards to be dropped.
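For example, a minimal sketch (assuming, as described above, that the future resolves to a Result carrying the elapsed Duration) that logs how long the shutdown took instead of panicking:

// Hypothetical variant of the last step: log the elapsed time on success,
// and a warning if the 30 second limit was hit, instead of calling expect.
match shutdown.shutdown_with_limit(Duration::from_secs(30)).await {
    Ok(elapsed) => tracing::info!(?elapsed, "graceful shutdown finished"),
    Err(err) => tracing::warn!(?err, "graceful shutdown did not finish in time"),
}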
Now, we run the command cargo run.
In another terminal, we run curl -v https://127.0.0.1:63801. This request fails, because curl does not trust the proxy’s self-signed certificate.
Now, we run curl -k -v https://127.0.0.1:63801 so that curl skips certificate verification. This time we receive the “hello client …, you were served by tls terminator proxy …” response, and the proxy’s output logs the established secure connection.
Completed main.rs file
use rama::{
    graceful::Shutdown,
    http::{server::HttpServer, Request, Response},
    layer::{ConsumeErrLayer, GetExtensionLayer},
    net::{
        forwarded::Forwarded,
        stream::SocketInfo,
        tls::server::{SelfSignedData, ServerAuth, ServerConfig},
    },
    proxy::haproxy::{
        client::HaProxyLayer as HaProxyClientLayer, server::HaProxyLayer as HaProxyServerLayer,
    },
    rt::Executor,
    service::service_fn,
    tcp::{
        client::service::{Forwarder, TcpConnector},
        server::TcpListener,
    },
    tls::{
        boring::server::{TlsAcceptorData, TlsAcceptorLayer},
        types::SecureTransport,
    },
    Context, Layer,
};
use std::{convert::Infallible, time::Duration};
use tracing::metadata::LevelFilter;
use tracing_subscriber::{fmt, prelude::*, EnvFilter};

async fn http_service<S>(ctx: Context<S>, _request: Request) -> Result<Response, Infallible> {
    let client_addr = ctx
        .get::<Forwarded>()
        .unwrap()
        .client_socket_addr()
        .unwrap();
    let proxy_addr = ctx.get::<SocketInfo>().unwrap().peer_addr();
    Ok(Response::new(
        format!(
            "hello client {client_addr}, you were served by tls terminator proxy {proxy_addr}\r\n"
        )
        .into(),
    ))
}

#[tokio::main]
async fn main() {
    tracing_subscriber::registry()
        .with(fmt::layer())
        .with(
            EnvFilter::builder()
                .with_default_directive(LevelFilter::DEBUG.into())
                .from_env_lossy(),
        )
        .init();

    let tls_server_config = ServerConfig::new(ServerAuth::SelfSigned(SelfSignedData::default()));
    let acceptor_data = TlsAcceptorData::try_from(tls_server_config).expect("create acceptor data");

    let shutdown = Shutdown::default();

    shutdown.spawn_task_fn(|guard| async move {
        let tcp_service = (
            TlsAcceptorLayer::new(acceptor_data).with_store_client_hello(true),
            GetExtensionLayer::new(|st: SecureTransport| async move {
                let client_hello = st.client_hello().unwrap();
                tracing::debug!(?client_hello, "secure connection established");
            }),
        )
            .layer(Forwarder::new(([127, 0, 0, 1], 62801)).connector(
                HaProxyClientLayer::tcp().layer(TcpConnector::new()),
            ));

        TcpListener::bind("127.0.0.1:63801")
            .await
            .expect("bind TCP Listener: tls")
            .serve_graceful(guard, tcp_service)
            .await;
    });

    shutdown.spawn_task_fn(|guard| async {
        let exec = Executor::graceful(guard.clone());
        let http_service = HttpServer::auto(exec).service(service_fn(http_service));
        let tcp_service =
            (ConsumeErrLayer::default(), HaProxyServerLayer::new()).layer(http_service);

        TcpListener::bind("127.0.0.1:62801")
            .await
            .expect("bind TCP Listener: http")
            .serve_graceful(guard, tcp_service)
            .await;
    });

    shutdown
        .shutdown_with_limit(Duration::from_secs(30))
        .await
        .expect("graceful shutdown");
}
Final thoughts
Rama is a great alternative for creating proxy services; I liked its modularity and being able to import only what you want to use. It is built on Tokio, and its layered service design is similar to Tower-based frameworks in the ecosystem.
Rama is under heavy development; I was advised to use the main branch for now to keep up with the rapid pace of changes. This means this article will become obsolete quickly, so I suggest using the documentation, the Discord server, and the main repository to stay up to date with the changes.
What I liked the most was its community: to get this server running I had to ask for help on the Discord server, and one of the authors helped me with my problem. Also, the repo has many “good first issues”, and they offer mentorship on many of them.
The source code is here.
Thank you for taking the time to read this article.
If you have any recommendations about other packages, architectures, how to improve my code, my English, or anything else, please leave a comment or contact me through Twitter or LinkedIn.
Resources
Rama repository
tls_boring_termination example
How To Set Up a Reverse Proxy (Step-By-Steps for Nginx and Apache)