Step-by-Step Guide to Building a WebSocket Chat App with Axum and React
April 22, 2024 · 12 min
In this guide, we'll walk through the process of creating a full-stack chat app using WebSockets. Our backend will be built with Axum, a powerful Rust web framework, and Shuttle, a platform for building and deploying Rust backends, while the frontend will be developed using React and Vite.
We'll cover:
Utilizing WebSocket in Axum and React.
Generating unique identifiers using nanoid.
Incorporating telemetry with tracing for enhanced logging.
You can find the complete code for this project on GitHub.
We can recv() from the socket and send() a message to it. Let's check that the backend works properly using the frontend. Run the server by executing cargo shuttle run, and open index.html in your browser. If it succeeds, you should see some messages in the developer console.
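For reference, an echo-style handler built on recv() and send() looks roughly like this (echo_socket is just a placeholder name; the routing that upgrades the connection is set up in the earlier steps):

use axum::extract::ws::WebSocket;

// Whatever the client sends comes straight back to the same client.
async fn echo_socket(mut socket: WebSocket) {
    while let Some(Ok(msg)) = socket.recv().await {
        // Stop once the client disconnects or the send fails.
        if socket.send(msg).await.is_err() {
            return;
        }
    }
}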
To handle multiple connections and enable chat functionality, we need to implement a broadcast mechanism. Imagine that three clients have connections to the server. When client A sends a message, the server needs to broadcast the received message to all clients.
Every incoming connection is treated as an independent task, a process executed asynchronously by the Tokio runtime. Consequently, we need a way to facilitate data exchange among these tasks. Fortunately, Tokio offers the precise tool for this purpose: the broadcast channel.
We initialize a sender (or transmitter) and a receiver as follows:
let (tx, mut rx1) = broadcast::channel(16);
let mut rx2 = tx.subscribe();
In our scenario, each task must monitor the broadcast channel while handling its client socket. Hence, the broadcast transmitter tx needs to be shared as state. Let's proceed with implementing state sharing.
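Based on how the state is used in the handler below, the shared state can be as simple as this (a minimal sketch):

use std::sync::Arc;
use axum::extract::ws::Message;
use tokio::sync::{broadcast::Sender, Mutex};

// Shared with every connection task through Axum's application state.
#[derive(Clone)]
struct AppState {
    broadcast_tx: Arc<Mutex<Sender<Message>>>,
}

// Constructed once at startup, e.g.:
// let (tx, _rx) = broadcast::channel(16);
// let app = AppState { broadcast_tx: Arc::new(Mutex::new(tx)) };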
The broadcast_tx is wrapped with Mutex and Arc to ensure safe sharing among multiple tasks. As mentioned earlier, the handler must process data from two sources: the broadcast channel and the client. Let's outline the implementation with the following code:
use futures_util::{
    stream::{SplitSink, SplitStream},
    SinkExt, StreamExt,
};

async fn handle_socket(ws: WebSocket, app: AppState) {
    let (ws_tx, ws_rx) = ws.split();
    let ws_tx = Arc::new(Mutex::new(ws_tx));

    {
        let broadcast_rx = app.broadcast_tx.lock().await.subscribe();
        tokio::spawn(async move {
            recv_broadcast(ws_tx, broadcast_rx).await;
        });
    }

    recv_from_client(ws_rx, app.broadcast_tx).await;
}
The initial line splits the socket into a sender and a receiver. While not strictly necessary, this setup enables concurrent read and write operations on the socket and can enhance efficiency compared to locking the entire socket. The split() function is provided by the futures_util crate.
Let’s start by implementing recv_from_client. When a message arrives, we’ll forward it to the broadcast channel instead of returning it to the client:
async fn recv_from_client(
    mut client_rx: SplitStream<WebSocket>,
    broadcast_tx: Arc<Mutex<Sender<Message>>>,
) {
    while let Some(Ok(msg)) = client_rx.next().await {
        if matches!(msg, Message::Close(_)) {
            return;
        }
        if broadcast_tx.lock().await.send(msg).is_err() {
            println!("Failed to broadcast a message");
        }
    }
}
Now, let’s complete the implementation with recv_broadcast:
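A minimal version just forwards every message from the broadcast receiver to the client socket and returns once the client can no longer be reached (the exact error handling here is a sketch):

use tokio::sync::broadcast::Receiver;

async fn recv_broadcast(
    client_tx: Arc<Mutex<SplitSink<WebSocket, Message>>>,
    mut broadcast_rx: Receiver<Message>,
) {
    while let Ok(msg) = broadcast_rx.recv().await {
        // If the send fails, the client has disconnected.
        if client_tx.lock().await.send(msg).await.is_err() {
            return;
        }
    }
}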
On the frontend, we initialize the WebSocket connection within a useEffect hook, ensuring it only happens once when the component mounts.
We set up a listener for incoming messages, updating the state with each new message received.
A form allows users to input and send messages, with the submit function handling the form submission by sending the message through the WebSocket connection.
With this implementation, our frontend is now fully functional.
Up until now, users can't identify who sent each message. To address this, we'll assign unique IDs to clients when they connect. We'll use nanoid for this purpose.
Let's get started with the backend. We'll define a struct to represent a message:
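Something along these lines works; ChatMessage and new_client_id are placeholder names, and the short five-character IDs match the ones that show up in the logs later:

use nanoid::nanoid;
use serde::{Deserialize, Serialize};

// One possible shape for the chat payload.
#[derive(Debug, Serialize, Deserialize)]
struct ChatMessage {
    client_id: String,
    message: String,
}

// Generates a short unique ID when a client connects.
fn new_client_id() -> String {
    // Five characters is plenty for a demo chat.
    nanoid!(5)
}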
Let’s experiment with tracing to improve the logging of our server.
In Rust, there are two main logging crates: log and tracing. While both provide logging interfaces, tracing offers more structured logging compared to log.
tracing revolves around three main concepts:
Span: Represents a time interval that contains events.
Event: A moment when something happened.
Subscriber: The component responsible for writing logs.
use tracing::{event, info, span, Level};
use tracing_subscriber::{fmt, prelude::*, EnvFilter};

fn main() {
    tracing_subscriber::registry()
        .with(fmt::layer())
        .with(EnvFilter::from_default_env())
        .init();

    let span = span!(Level::INFO, "my-span");
    {
        let _enter = span.enter();
        event!(Level::INFO, "event 1");
        event!(Level::WARN, "event 2");
        let _ = add(5, 19);
    }
}

#[tracing::instrument()]
fn add(a: i32, b: i32) -> i32 {
    info!("inside add");
    a + b
}
In this example, tracing_subscriber is initialized with some options. The span! macro creates a new span, and events occur within that span. The add function is decorated with instrument, a macro that automatically creates a new span every time the function is executed.
When executed with RUST_LOG=trace cargo run, the output will look something like this:
2024-04-22T02:53:36.184122Z INFO my-span: tracing_sample: event 1
2024-04-22T02:53:36.184180Z WARN my-span: tracing_sample: event 2
2024-04-22T02:53:36.184210Z INFO my-span:add{a=5 b=19}: tracing_sample: inside add
Each line represents an event, including date, time, log level, span name, and message.
In the above example, the environment variable RUST_LOG was set to specify logging configuration. The tracing_subscriber was initialized with EnvFilter::from_default_env(). Since the default log level is ERROR, we needed to specify a lower priority threshold to display logs.
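If you'd prefer not to depend on the environment variable being set, the filter can fall back to a level chosen in code; a small variation on the setup above (init_tracing is just a placeholder name):

use tracing_subscriber::{fmt, prelude::*, EnvFilter};

fn init_tracing() {
    // Use RUST_LOG when it is set; otherwise default to "info".
    let filter = EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info"));

    tracing_subscriber::registry()
        .with(fmt::layer())
        .with(filter)
        .init();
}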
To integrate tracing into our server and track client connections and disconnections, we can modify the handle_socket function:
use tracing::{error, info};

#[tracing::instrument(skip(ws, app))]
async fn handle_socket(ws: WebSocket, app: AppState, client_id: String) {
    info!("connected");

    let (ws_tx, ws_rx) = ws.split();
    let ws_tx = Arc::new(Mutex::new(ws_tx));

    if send_id_to_client(&client_id, ws_tx.clone()).await.is_err() {
        error!("disconnected");
        return;
    }

    {
        let broadcast_rx = app.broadcast_tx.lock().await.subscribe();
        tokio::spawn(async move {
            recv_broadcast(ws_tx, broadcast_rx).await;
        });
    }

    recv_from_client(&client_id, ws_rx, app.broadcast_tx).await;
    info!("disconnected");
}
We've added instrument and some logging to the handle_socket function. The tracing subscriber initialization is handled automatically by Shuttle.
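handle_socket also calls a small helper, send_id_to_client, which hands the generated ID to the newly connected client; one possible sketch sends it as the first text frame (the payload format here is an assumption):

async fn send_id_to_client(
    client_id: &str,
    ws_tx: Arc<Mutex<SplitSink<WebSocket, Message>>>,
) -> Result<(), axum::Error> {
    // The first frame the client receives is its own ID.
    ws_tx
        .lock()
        .await
        .send(Message::Text(client_id.to_string()))
        .await
}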
The output will resemble:
2024-04-21T00:00:01.665-00:00 [Runtime] Starting on 127.0.0.1:8000
2024-04-21T00:00:04.335-00:00 [Runtime] INFO handle_socket{client_id="6khXi"}: fullstack_wschat::web_socket: connected
2024-04-21T00:00:04.348-00:00 [Runtime] INFO handle_socket{client_id="C-2r0"}: fullstack_wschat::web_socket: connected
2024-04-21T00:00:04.423-00:00 [Runtime] INFO handle_socket{client_id="6khXi"}: fullstack_wschat::web_socket: disconnected
Although our project may not fully demonstrate the significance of tracing due to its size, this example provides a foundation for understanding its utility.
In this post, we provided an overview of using WebSocket and building a full-stack application with Axum and React. We explored enhancements such as implementing broadcast functionality with Tokio’s broadcast channel, integrating client IDs for user identification, and leveraging tracing for improved logging.