Forging Leptos Query

Since falling down the Rust rabbit hole, I've grown fond of Leptos, a bleeding-edge full-stack framework for building fast web apps in Rust. In my time using React, my go-to async state management library was Tanstack Query, and the Leptos ecosystem had no equivalent. So I decided to build Leptos Query.

The Dark Age of React

Tanstack Query (TSQ) is a library that will always have a soft spot in my heart. It's a framework-agnostic tool with the perfect level of abstraction for solving one problem: async state within a synchronous user interface.

Before TSQ, many engineers struggled with the client needing to accurately maintain a complex global state. Constant challenges included caching previous responses to avoid loading states, data going out of date, and changes made by another user not being reflected until a page refresh. API responses were also commonly cached in Redux, where you had to keep track of each request's execution, handle error states and retries, and more. It was a mess.

Redux Flashbacks

But what exactly is the 'client state'? TSQ shifted the landscape, revealing that many things considered 'client state' were actually 'server state.' In many cases, the true state comes from a server's oracle of truth, not a faulty client-side state machine. This simplifies the problem: when in doubt, you can just ask the server and use the response to render something in the UI.

This realization led to TSQ's revolutionary concept: The Query. A query represents the state of an asynchronous process that yields a result, which is bound to a unique key. It includes features such as SWR caching with background refetching, loading and error states, retries, request deduplication, refetch intervals, invalidation, and more. These are all common properties of asynchronous processes, and TSQ provides a powerful abstraction to manage them. Instead of trying to manage all of the complexity yourself by manually making an API request inside of a useEffect hook, you can just use TSQ's declarative useQuery hook.

Here are some of the benefits of using a Query Manager like Tanstack Query.

TSQ: Query Caching

Every query has a unique key to identify it. This key is used to store the query in the cache, and to retrieve it when needed. Given queries are bound to a unique key, TSQ will also automatically deduplicate requests. This means that if a query is already in flight, the system will not make another request, but instead wait for the existing request to complete. This is a powerful feature that ensures the client is not making unnecessary requests to the server.
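To make the key-based caching concrete, here's a minimal, hypothetical sketch in Rust (the names `QueryCache` and `get_or_fetch` are my own, not part of TSQ's API): on a cache hit the fetcher never runs, which is the same property that lets TSQ serve repeated reads of a key without redundant requests.

```rust
use std::collections::HashMap;

/// A hypothetical key-based query cache (illustrative, not TSQ's actual API).
struct QueryCache {
    entries: HashMap<String, String>,
    /// Counts how many times the underlying fetcher actually ran.
    fetches: u32,
}

impl QueryCache {
    fn new() -> Self {
        Self { entries: HashMap::new(), fetches: 0 }
    }

    /// Return the value for `key`, running `fetcher` only on a cache miss.
    fn get_or_fetch(&mut self, key: &str, fetcher: impl FnOnce() -> String) -> String {
        if let Some(cached) = self.entries.get(key) {
            // Hit: the fetcher is never invoked.
            return cached.clone();
        }
        // Miss: run the fetcher once and store the result under the key.
        self.fetches += 1;
        let value = fetcher();
        self.entries.insert(key.to_owned(), value.clone());
        value
    }
}
```

A real implementation would also track in-flight requests so that concurrent reads of the same key await one shared fetch, but the unique key is the mechanism in both cases.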

One of the most powerful features of TSQ is its ability to employ a configurable "Stale While Revalidate" (SWR) strategy for query caching. This approach drastically simplifies data fetching, improves user experience, and ensures the client stays up to date with the latest data.

What is Stale While Revalidate (SWR)?

SWR is a cache invalidation strategy that allows the client to use stale, or slightly outdated, data while simultaneously fetching the latest data from the server. This approach provides an immediate response using cached data, followed by a seamless update once the fresh data is retrieved.

Benefits of SWR

  • Caching: When a query is executed, the result is stored in the cache under its unique key.

  • Background Refetch: If the cached data is considered stale, the system starts fetching the latest data from the server in the background, ensuring that the information displayed to the user is updated as soon as the fresh data is available.

  • Fewer Loading States: When a user requests data that's been previously fetched, the cached (possibly stale) data is displayed immediately. This ensures a responsive user experience, especially on subsequent page loads.

  • Seamless Transition: Once the latest data is fetched, the UI automatically updates, replacing the stale data with the latest fresh data.
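The mechanics above can be sketched in a few lines of Rust. This is a hypothetical single-entry cache of my own invention, not the TSQ or Leptos Query API: a read always returns the cached value immediately, along with a flag telling the caller to refetch in the background when the entry has aged past its stale time.

```rust
use std::time::{Duration, Instant};

/// A hypothetical single-entry SWR cache (illustrative only).
struct SwrCache<T> {
    entry: Option<(T, Instant)>,
    stale_time: Duration,
}

impl<T: Clone> SwrCache<T> {
    fn new(stale_time: Duration) -> Self {
        Self { entry: None, stale_time }
    }

    /// Store a freshly fetched value, stamped with its fetch time.
    fn insert(&mut self, value: T, fetched_at: Instant) {
        self.entry = Some((value, fetched_at));
    }

    /// SWR read: always serve the cached value immediately (even if stale),
    /// plus a flag telling the caller to kick off a background refetch.
    fn read(&self, now: Instant) -> Option<(T, bool)> {
        self.entry.as_ref().map(|(value, fetched_at)| {
            let needs_refetch = now.duration_since(*fetched_at) > self.stale_time;
            (value.clone(), needs_refetch)
        })
    }
}
```

The key design point is that staleness never blocks the read: the UI renders the old value instantly, and the refetch flag is what drives the background update and seamless transition.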

TSQ: The Best of Dynamic Typing

TSQ makes beautiful tradeoffs between type safety and ergonomics, leveraging a mix of static and dynamic typing made possible by TypeScript.

Each Query has an associated key. In TSQ, all keys are arrays.

The Query Cache is just a JavaScript object, where the keys are serialized Query Keys and the values are your Query Data plus meta information.

So in TypeScript terms, we can use the following types to represent a Query Cache. Keep in mind these are drastically simplified for the sake of this article.

type QueryCache = Map<string, Query>

type Query = {
    data: any,
    // Meta information ...
}

Let's look at an example query in React + TypeScript: a query that tells us whether a user likes a song.

// Query for a Track's Like Status.
const useTrackLikeQuery = (trackId: string): UseQueryResult<boolean, unknown> => {
   return useQuery({
      queryKey: trackLikeQueryKey(trackId),
      queryFn: () => getTrackLike(trackId),
   })
}

// Query Key.
const trackLikeQueryKey = (trackId: string): string[] => ['TrackLike', trackId]

// Query Fetcher.
const getTrackLike = async (trackId: string): Promise<boolean> => {
   // ...
}

The Query useTrackLikeQuery returns a UseQueryResult where the data being fetched is of type boolean, and the error type is unknown (one of the tragedies of JavaScript).

We can see that the query key is an array of strings, where the first element is a label 'TrackLike' to differentiate this category of queries (e.g. Track likes in the Query Cache), and the second element is the trackId. This query key function guarantees that every query will have a unique slot in the cache.

It's important to note that the type safety is inferred from the invocation of useQuery in useTrackLikeQuery. The cache itself has no notion of the type of data being stored.

TSQ: A Common Footgun

If you somehow manage to have non-unique query keys, you can have multiple query value types for the same key, and this can lead to runtime errors.

// Query for a Track.
const useTrackQueryConflict = (trackId: string): UseQueryResult<Track, unknown> => {
   return useQuery({
      // Duplicate Query Key!
      queryKey: trackLikeQueryKey(trackId),
      queryFn: () => getTrack(trackId),
   })
}

type Track = {
    trackId: string,
    trackName: string,
    // ...
}

// Query Fetcher.
const getTrack = async (trackId: string): Promise<Track> => {
   // ...
}

Note how we are using the same Query Key function trackLikeQueryKey for both useTrackLikeQuery and useTrackQueryConflict. This is a problem. If we simultaneously use useTrackLikeQuery and useTrackQueryConflict with the same trackId in our React app, we will very likely hit a runtime error, because one place expects a boolean while the other expects a Track object.

I want to emphasize that once you are aware of this footgun, it is NOT common in practice. It's easy to avoid by ensuring that your query keys are unique. But it helps you understand the dynamism of TSQ, and how that dynamism is leveraged to make the library so ergonomic.
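In Rust terms, this footgun would look like a failed downcast. Here's a hypothetical sketch (names `NaiveCache` and `read_as` are mine): if a cache is keyed only by the query key, with type-erased values, a second caller reading the same key as a different type gets nothing back at runtime.

```rust
use std::any::Any;
use std::collections::HashMap;

/// A naive cache keyed only by the serialized query key (hypothetical sketch).
type NaiveCache = HashMap<String, Box<dyn Any>>;

/// Try to read the entry back as type `T`; a key collision between two
/// different value types surfaces as `None` at runtime.
fn read_as<'a, T: 'static>(cache: &'a NaiveCache, key: &str) -> Option<&'a T> {
    cache.get(key).and_then(|boxed| boxed.downcast_ref::<T>())
}
```

Rust at least fails safely here (an `Option` rather than an exception), and the design described below goes further by making such collisions impossible.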

Porting Tanstack Query to Rust

Now that we've covered how TSQ works, how can we implement an Async Query manager in Leptos, a Rust Web Framework?

The task is non-trivial, given how different Rust and JavaScript are. Rust is a compiled language known for its expressive type system with algebraic data types and traits, granular memory control, powerful macros, and concurrent programming capabilities. Meanwhile, JavaScript's interpreted, just-in-time-compiled, dynamically typed nature offers a simpler, more practical approach to development, though it is often unsafe.

It's worth mentioning that TSQ's core implementation is framework agnostic and it provides integration wrappers for React, SolidJS, Vue, and Svelte. I don't have any such constraint, and can leverage Leptos' reactivity directly.

Comparing Leptos and React

  1. Rendering and Markup: Both Leptos and React employ declarative rendering with JSX/RSX markup languages.

  2. Virtual DOM vs. Reactivity: React uses a virtual DOM and follows specific rules for hooks to guarantee re-render stability. In contrast, Leptos champions fine-grained reactivity using Signals.

  3. Full-Stack Development: Leptos is designed to be full-stack and isomorphic, targeting WebAssembly and supporting server-side rendering. React is primarily a front-end library, though frameworks built on top of it turn it into a full-stack solution. That said, having the Leptos backend run in Rust makes SSR orders of magnitude faster and more efficient than any full-stack JS framework.

  4. Maturity: React has been a standard in the JavaScript community since 2013, and Leptos is just a year old.

Dynamic Typing in Rust

Rust's type system is robust, but what if we want some of the flexibility of dynamic typing like in TSQ? Is there a way to have the best of both worlds?

Actually, yes! We can look into the std::any module, which has some neat tools for type reflection. One challenge for implementing a Query Manager in Rust is handling a lot of dynamic entries in one cache, each needing a unique Key and Value combination.
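Before diving in, here's a tiny taste of what std::any gives us. The `erase` helper is my own illustrative name, but the trait object and downcast machinery are straight from the standard library:

```rust
use std::any::Any;

/// Erase a value's concrete type, keeping it behind a trait object.
/// (`erase` is just an illustrative helper, not a std function.)
fn erase<T: 'static>(value: T) -> Box<dyn Any> {
    Box::new(value)
}
```

Given a `Box<dyn Any>`, `downcast_ref::<T>()` returns `Some(&T)` when the erased type really is `T`, and `None` otherwise — dynamic typing with a safety net.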

So I came up with a solution: the 'AnyMap' data structure. It's the backbone of Leptos Query, blending Rust's strong typing with the adaptability needed for today's web apps.

type AnyMap = HashMap<TypeKey, Box<dyn Any>>;

type TypeKey = (TypeId, TypeId);

struct CacheEntry<K, V>(HashMap<K, V>);

The outer Map is indexed by a TypeKey, which is a tuple of two TypeIds. The first TypeId is the type of the Query Key, and the second TypeId is the type of the Query Value.

This guarantees that we will always get the correct type of data from the cache, which is a huge win for safety. This approach also lets you use the same key for different value types, which is extremely convenient.

The next thing to notice is the Box<dyn Any>. This is the magic that lets us store any type of data in the cache. The value is actually of type CacheEntry<K, V>, but we box it as dyn Any so that sub-caches of different types can live in the same outer map.

When we have a Box<dyn Any>, we can use the downcast functions to get the inner value. This is a runtime operation, but it's safe because the TypeKey tells us the type of the inner value. Though there is a cost to the runtime reflection and dynamic dispatch associated with Box<dyn Any> and downcasting, the developer ergonomics, safety, and efficiency of caching far outweigh it.

Here's the function at the core of the Query Client, showing how we extract the typed inner Map from the cache.

/// The Cache Client to store query data.
/// Exposes utility functions to manage queries.
pub struct QueryClient {
    pub(crate) cx: Scope,
    pub(crate) cache: Rc<RefCell<AnyMap>>,
}

impl QueryClient {

    /// Utility function to find or create a cache entry for the (K, V)
    /// combination, and then apply the function to it.
    fn use_or_insert_cache<K, V, R>(
        &self,
        // Function to apply to the cache entry.
        func: impl FnOnce((Scope, &mut HashMap<K, V>)) -> R + 'static,
    ) -> R
    where
        K: 'static,
        V: 'static,
    {
        // Borrow the AnyMap!
        let mut cache = self.cache.borrow_mut();

        // Create the TypeKey.
        let type_key: TypeKey = (TypeId::of::<K>(), TypeId::of::<V>());

        // Find or create the cache entry.
        let cache: &mut Box<dyn Any> = match cache.entry(type_key) {
            Entry::Occupied(o) => o.into_mut(),
            Entry::Vacant(v) => {
                let wrapped: CacheEntry<K, V> = CacheEntry(HashMap::new());
                v.insert(Box::new(wrapped))
            }
        };

        // Downcast the cache entry to the correct type.
        let cache: &mut CacheEntry<K, V> = cache.downcast_mut().expect(
            "Error: Query Cache Type Mismatch. This should not happen. Please file a bug report.",
        );

        // Call the function with the cache entry.
        func((self.cx, &mut cache.0))
    }
}
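To see the whole pattern end to end without the Leptos plumbing (no Scope, no RefCell), here's a self-contained sketch of the AnyMap, simplified relative to the real Leptos Query internals:

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::hash::Hash;

type TypeKey = (TypeId, TypeId);

/// One typed sub-cache per (key type, value type) pair.
struct CacheEntry<K, V>(HashMap<K, V>);

/// Maps each (K, V) TypeKey to a type-erased CacheEntry<K, V>.
struct AnyMap(HashMap<TypeKey, Box<dyn Any>>);

impl AnyMap {
    fn new() -> Self {
        AnyMap(HashMap::new())
    }

    /// Find or create the typed sub-cache for the (K, V) combination.
    fn use_or_insert_cache<K: Eq + Hash + 'static, V: 'static>(&mut self) -> &mut HashMap<K, V> {
        let type_key = (TypeId::of::<K>(), TypeId::of::<V>());
        let entry = self
            .0
            .entry(type_key)
            .or_insert_with(|| Box::new(CacheEntry::<K, V>(HashMap::new())));
        // Safe: the TypeKey guarantees the concrete type of the boxed entry.
        &mut entry
            .downcast_mut::<CacheEntry<K, V>>()
            .expect("TypeKey guarantees the downcast succeeds")
            .0
    }
}
```

Because the TypeKey includes both the key and value types, the same key value can index different value types without ever colliding — the exact footgun TSQ leaves to the developer.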

Leptos Resource - Primitive for Async Tasks

Leptos provides a Resource primitive to integrate async tasks into the synchronous reactive system.

Resources integrate with the Suspense and Transition components to simplify the loading process and work with server-side rendering. Reading a resource from within a Suspense boundary registers that resource with the Suspense component, and the fallback will be displayed until the resource is resolved.

Here's a Todo Example using the Resource primitive.

Let's define the following endpoint to get a Todo by ID.

use leptos::*;
use serde::*;

#[derive(Serialize, Deserialize, Clone)]
struct Todo {
    id: u32,
    content: String,
}

// Don't do this in a real app! Just for demo purposes.
#[cfg(feature = "ssr")]
static GLOBAL_TODOS: RwLock<Vec<Todo>> = RwLock::new(vec![]);

type TodoResponse = Result<Option<Todo>, ServerFnError>;

#[server(GetTodo, "/api")]
async fn get_todo(id: u32) -> Result<Option<Todo>, ServerFnError> {
    // Mimic a delay.
    let todos = GLOBAL_TODOS.read().unwrap();
    Ok(todos.iter().find(|t| t.id == id).cloned())
}

Now let's use the endpoint in a component. This component will fetch a Todo from the server, and display it using a Resource. If the Todo is not found, it will display "Not Found".

#[component]
fn TodoWithResource(cx: Scope) -> impl IntoView {
    let (todo_id, set_todo_id) = create_signal(cx, 0_u32);

    let todo_resource: Resource<u32, TodoResponse> = create_resource(cx, todo_id, get_todo);

    view! { cx,
        <Suspense fallback=move || view! { cx, <p>"Loading..."</p> }>
            {move || {
                todo_resource
                    .read(cx)
                    .map(|response| match response.ok().flatten() {
                        Some(todo) => todo.content,
                        None => "Not found".into(),
                    })
            }}
        </Suspense>
    }
}

If we have Resources, why do we need Queries?

Resources don't provide any caching natively, meaning every time we mount a component such as our TodoWithResource, we will make a network request to fetch the data.

If you want caching, you have to manually lift the resource into a higher scope (closer to the base of the component tree). And every time the key changes, the resource will be re-fetched, so there's no caching per key, only per resource.

This involves a lot of unnecessary boilerplate, and becomes very tedious if you have many resources.

Here's a simple example:

// Root component for our Leptos App.
#[component]
fn App(cx: Scope) -> impl IntoView {
    let (todo_id, set_todo_id) = create_signal(cx, 0_u32);
    // Store the resource in a higher scope's context.
    let todo: Resource<u32, TodoResponse> = create_resource(cx, todo_id, get_todo);
    provide_context(cx, todo);

    view! { cx, <TodoComponent/> }
}

#[component]
fn TodoComponent(cx: Scope) -> impl IntoView {
    let todo_resource: Resource<u32, TodoResponse> = use_context(cx).expect("No Todo Resource Found!");

    view! { cx,
        <Suspense fallback=move || view! { cx, <p>"Loading..."</p> }>
            {move || {
                todo_resource
                    .read(cx)
                    .map(|response| match response.ok().flatten() {
                        Some(todo) => todo.content,
                        None => "Not found".into(),
                    })
            }}
        </Suspense>
    }
}

Leptos Query

Leptos Query uses Resources internally to stay compatible with SSR and Suspense, and provides a simpler API, SWR caching, and many other niceties out of the box.

Here's an example. We are storing a CacheEntry<u32, TodoResponse> in the QueryClient's cache.

Given the response is cached per key (the u32 todo_id), any subsequent loads for a specific todo will not involve any foreground loading and will be served from the cache. If the query is considered stale, it will be re-fetched in the background, and the UI will be updated with the new response once it arrives. Stale time is configurable using QueryOptions.

use leptos_query::*;

#[component]
fn TodoComponentWithQuery(cx: Scope) -> impl IntoView {
    let (todo_id, set_todo_id) = create_signal(cx, 0_u32);

    let QueryResult { data, .. } = leptos_query::use_query(cx, todo_id, get_todo, QueryOptions::default());

    view! { cx,
        <Suspense fallback=move || view! { cx, <p>"Loading..."</p> }>
            {move || {
                data.get()
                    .map(|a| match a.ok().flatten() {
                        Some(todo) => todo.content,
                        None => "Not found".into(),
                    })
            }}
        </Suspense>
    }
}

QueryClient: Interacting with the Query Cache directly

The QueryClient lets you interact with the query cache to invalidate queries, observe queries, and make optimistic updates.

Let's beef up our Todo Example a bit.

  1. We will add an endpoint and component to load all the todos.

  2. Add a form to create a new todo.

  3. Add an input to load a specific todo by id.

Starting with the server endpoints.

// Get all todos.
#[server(GetTodos, "/api")]
pub async fn get_todos() -> Result<Vec<Todo>, ServerFnError> {
    let todos = GLOBAL_TODOS.read().unwrap();
    Ok(todos.clone())
}

// Add a todo.
#[server(AddTodo, "/api")]
pub async fn add_todo(content: String) -> Result<Todo, ServerFnError> {
    let mut todos = GLOBAL_TODOS.write().unwrap();

    let new_id = todos.last().map(|t| t.id + 1).unwrap_or(0);

    let new_todo = Todo {
        id: new_id,
        content,
    };

    todos.push(new_todo.clone());

    Ok(new_todo)
}

Now let's make a component to load all the todos.

#[component]
fn AllTodos(cx: Scope) -> impl IntoView {
    let QueryResult { data, .. } = use_query(
        cx,
        || (),
        |_| async move { get_todos().await.unwrap_or_default() },
        QueryOptions::default(),
    );

    let todos: Signal<Vec<Todo>> = Signal::derive(cx, move || data.get().unwrap_or_default());

    view! { cx,
        <h3>"All Todos"</h3>
        <Suspense fallback=move || view! { cx, <p>"Loading..."</p> }>
            <Show when=move || !todos.get().is_empty() fallback=|cx| view! { cx, <p>"No todos"</p> }>
                <For
                    each=todos
                    key=|todo| todo.id
                    view=move |cx, todo| view! { cx, <li>{todo.id} " " {todo.content}</li> }
                />
            </Show>
        </Suspense>
    }
}

And another component for creating a Todo. Note how we're watching the response of the add_todo action. When the response is successful, we invalidate the query cache for the individual TodoResponse and for the Vec<Todo> list. This causes any active queries to immediately refetch in the background, updating the cache and the UI.

#[component]
fn AddTodo(cx: Scope) -> impl IntoView {
    let add_todo = create_server_action::<AddTodo>(cx);

    let response = add_todo.value();

    let client = use_query_client(cx);

    create_effect(cx, move |_| {
        // If the action is successful.
        if let Some(Ok(todo)) = response.get() {
            let id = todo.id;
            // Invalidate individual TodoResponse.
            client.clone().invalidate_query::<u32, TodoResponse>(id);

            // Invalidate AllTodos.
            client.clone().invalidate_query::<(), Vec<Todo>>(());
        }
    });

    view! { cx,
        <ActionForm action=add_todo>
            <input type="text" name="content"/>
            <button type="submit">"Add Todo"</button>
        </ActionForm>
    }
}
Here's a demo.

Note how two requests are initiated as soon as a Todo is created: one for the TodoResponse and one for the Vec<Todo>. Each of those responses then takes a second to complete, after which you get the updated query.

Todo Invalidation Demo

If you really want maximum speed, you can perform an optimistic update like this, which immediately updates the entry in the cache and then refetches in the background (confirming the change with the server).

    create_effect(cx, move |_| {
        // If the action is successful.
        if let Some(Ok(todo)) = response.get() {
            let id = todo.id;
            // Invalidate individual TodoResponse.
            client.clone().invalidate_query::<u32, TodoResponse>(id);

            // Invalidate AllTodos.
            client.clone().invalidate_query::<(), Vec<Todo>>(());

            // Optimistic update.
            let as_response = Ok(Some(todo));
            client.set_query_data::<u32, TodoResponse>(id, move |_| Some(as_response));
        }
    });

Optimistic Update Demo

It's important to recognize how much legwork you'd have to do to get this behavior without a library like Leptos Query. You'd have to manually manage the cache and refetch queries yourself.

If you're curious and want to play around with it more, check out the example project.

Invalidating Multiple Queries

You can invalidate groups of related queries by using QueryClient::invalidate_query_type.

let client = use_query_client(cx);

// Invalidates all queries of type `TodoResponse`, where the key is `u32`.
client.invalidate_query_type::<u32, TodoResponse>();

// The queries below will be invalidated.
use_query(cx, || 1, get_todo, QueryOptions::default());
use_query(cx, || 2, get_todo, QueryOptions::default());

And you can also invalidate every query in the cache using QueryClient::invalidate_all_queries.

let client = use_query_client(cx);

// Invalidate every query in the cache.
client.invalidate_all_queries();
This mimics the behavior of the invalidateQueries method in TSQ, which uses the label in the first entry of the Key Array to match a whole group of queries.

let client = useQueryClient();

// Invalidate every query in the cache.
client.invalidateQueries()

// Invalidate every query with a key that starts with `todo`.
client.invalidateQueries({ queryKey: ['todo'] })

Thanks for Reading

Leptos Query is a powerful addition to the Leptos framework, providing a sleek way to manage asynchronous queries. By handling complexities like configurable SWR, background refetching, and query invalidation, it offers a streamlined developer experience that leans on the safety of strong typing and the flexibility of dynamic typing.

Built with Rust & Leptos

2024 Nico Burniske. All rights reserved.