
Introducing PliantDb

In my last post, I wrote about what inspired me to start working on PliantDb and my grandiose visions for the project. Today, I reached an important milestone: I'm ready to begin working on the client-server architecture. One of my goals in blogging more regularly is to use these posts as moments to reflect on progress, both to organize and to share my thoughts. As such, I wanted to take a look at what the fundamental building block of PliantDb looks like: the local storage layer.

PliantDb is heavily inspired by CouchDB. If you're unfamiliar with it, don't worry; I'll explore some of the concepts behind it in this post.

A brief overview of CouchDB's fundamental concepts

CouchDB stores all of a database's documents in a single collection. You can ask for any document by its unique ID. Because CouchDB is designed to expose a database as an HTTP server, a simple HTTP GET request to /databasename/documentid will retrieve the document's contents.

CouchDB adds functionality through design documents. A design document is a collection of functionality including views, validations, triggers, and more. Design documents are simply JSON objects with JavaScript functions providing the design document's behavior.

CouchDB embraces the idea that multiple design documents might share documents. Even if you write views that only apply to a specific set of documents, CouchDB will still reindex all the views each time any document in that database is updated. Fear not: reindexing is not that big of a deal; it's incremental and very efficient. PliantDb, however, will be taking a different approach (with different tradeoffs).

How PliantDb stores data

Each database in PliantDb can contain one or more Collections of data. Currently, each Collection is a definition of one or more map/reduce views, but additional functionality similar to Design Documents will come eventually.

The inspiration for this departure from CouchDB's approach was my real-world use case at my last business. It would have been nice to be able to physically divide the data inside of an individual customer's database instead of needing to create multiple databases to achieve the same division.

The second inspiration is to create a way for developers to ship pre-made functionality that can work with any PliantDb connection that supports their Collection. For example, the PliantDb server logs will use a crate that exposes a log backend for SirLog, making it easy to embed log aggregation into Cosmic Verge, PliantDb itself, and other projects. Another example might be the authentication framework we're going to write for Cosmic Verge -- it could serve as a generic framework for other developers as well.

sled is the underlying storage mechanism, and because it is also built around BTrees, PliantDb was able to take a lot of shortcuts by relying on this mature and well-tested crate. You just need to take care to encode your View keys using a byte encoding that sorts correctly when compared byte-for-byte. PliantDb takes care of this automatically for many types.
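
To see why byte-ordered encodings matter, here's a tiny standalone illustration (not PliantDb code, just the general principle): the big-endian bytes of an unsigned integer sort the same way the integers do, while little-endian bytes do not.

fn main() {
    let small: u32 = 1;
    let large: u32 = 256;

    // Big-endian bytes compare the same way the numbers do, so a
    // byte-for-byte ordered store (like a btree) keeps keys in numeric order.
    assert!(small.to_be_bytes() < large.to_be_bytes()); // [0,0,0,1] < [0,0,1,0]

    // Little-endian puts the least significant byte first, so 1 sorts
    // *after* 256 when compared byte-for-byte.
    assert!(small.to_le_bytes() > large.to_le_bytes()); // [1,0,0,0] > [0,1,0,0]
}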

Digging through a real example

I wrote a couple of fairly well-commented examples, but while they talk about the functionality, they don't explain how it actually works. Let's take a look at some excerpts from view-examples.rs:

#[derive(Debug, Serialize, Deserialize)]
struct Shape {
    pub sides: u32,
}

...

impl Collection for Shape {
    fn id() -> collection::Id {
        collection::Id::from("shapes")
    }

    fn define_views(schema: &mut Schematic) {
        schema.define_view(ShapesByNumberOfSides);
    }
}

This is the definition of the Shape collection. Collection Ids need to be unique. Right now they're just strings, but I want to switch to something that encourages authors to use unique values. That way, if you switch logging frameworks, for example, each framework uses its own set of data rather than potentially causing issues from mismatched data types in the same collection.

The second method is the next interesting part. It's where we tell PliantDb that the Shape collection has a view, ShapesByNumberOfSides. Let's take a look at it:

#[derive(Debug)]
struct ShapesByNumberOfSides;

impl View for ShapesByNumberOfSides {
    type Collection = Shape;
    type Key = u32;
    type Value = usize;

Views use Rust's associated types. The Collection associated type is self-explanatory. Key and Value are related to the map() and reduce() functions. Let's look at the functions a View has:

    fn version(&self) -> u64 {
        1
    }

    fn name(&self) -> Cow<'static, str> {
        Cow::from("by-number-of-sides")
    }

    fn map(&self, document: &Document<'_>) -> MapResult<Self::Key, Self::Value> {
        let shape = document.contents::<Shape>()?;
        Ok(Some(document.emit_key_and_value(shape.sides, 1)))
    }

    fn reduce(
        &self,
        mappings: &[MappedValue<Self::Key, Self::Value>],
        _rereduce: bool,
    ) -> Result<Self::Value, view::Error> {
        Ok(mappings.iter().map(|m| m.value).sum())
    }
}

Version and Name are used for the reasons you'd expect -- if you update the functionality of the view, change the version number and PliantDb will re-index your view as needed.

Map is where things get interesting. The first line of that method hides a lot of power provided by serde:

let shape = document.contents::<Shape>()?;

Under the hood, PliantDb stores your data using CBOR, a binary format that has implementations in many languages. It does this using serde. If you were to dump the data from this database, instead of the JSON you'd receive from CouchDB, you'd receive CBOR. That said, if you don't want to use serde, you can access Document.contents directly.
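
As a quick illustration of that round trip, here's a minimal sketch using the serde_cbor crate. It only demonstrates the format; it isn't necessarily the crate or code path PliantDb uses internally.

use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize, PartialEq)]
struct Shape {
    pub sides: u32,
}

fn main() -> Result<(), serde_cbor::Error> {
    let triangle = Shape { sides: 3 };

    // Serialize to CBOR's compact binary representation.
    let bytes = serde_cbor::to_vec(&triangle)?;

    // And deserialize it back into the strongly typed struct.
    let decoded: Shape = serde_cbor::from_slice(&bytes)?;
    assert_eq!(triangle, decoded);

    Ok(())
}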

Back in map(), the next line is concise, but it contains the real meat of the implementation:

Ok(Some(document.emit_key_and_value(shape.sides, 1)))

This line creates a Map with the number of sides of the Shape we deserialized as the "key" and 1 as the "value".
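
To picture what those emitted entries become, here's a conceptual sketch using a plain BTreeMap: each emitted key maps to the entries that produced it. This is only an illustration; PliantDb's real index lives in sled and stores serialized keys and values.

use std::collections::BTreeMap;

fn main() {
    // key = number of sides, entries = (document id, emitted value)
    let mut index: BTreeMap<u32, Vec<(u64, usize)>> = BTreeMap::new();

    // Suppose three triangles and one square have been mapped.
    for (doc_id, sides) in [(1u64, 3u32), (2, 3), (3, 3), (4, 4)] {
        index.entry(sides).or_default().push((doc_id, 1));
    }

    // A query for key 3 is a single lookup in the ordered index.
    assert_eq!(index[&3].len(), 3);
}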

Behind the scenes, PliantDb builds a btree index using the keys provided by map(). More than one document can emit the same key, and when querying you'll receive multiple results back. Let's take a look at one of the queries from later in the example:

// At this point, our database should have 3 triangles:
let triangles = db
    .view::<ShapesByNumberOfSides>()
    .with_key(3)
    .query()
    .await?;

When executing this query, PliantDb will update the view's index if needed, then find all entries with the key 3 and return them. query() returns the Map entries, but if you also want the documents, query_with_docs() will return MappedDocuments instead (sketched below). There are also other ways to query views than exact key matches.
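
Here's a hedged sketch of what using query_with_docs() might look like; the field names on the mapped results are assumptions for illustration, not confirmed API:

// Hypothetical sketch -- the `document` field name is an assumption.
let triangles = db
    .view::<ShapesByNumberOfSides>()
    .with_key(3)
    .query_with_docs()
    .await?;

for mapped in &triangles {
    // Each result pairs the emitted key/value with its source document,
    // so the full Shape can be deserialized without a second lookup.
    let shape = mapped.document.contents::<Shape>()?;
    println!("found a shape with {} sides", shape.sides);
}

With querying covered, let's move on to looking at what reduce() provides.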

In the example, we're creating a bunch of shapes with different numbers of sides, and reduce is a terse one-liner:

Ok(mappings.iter().map(|m| m.value).sum())

reduce() is called periodically when PliantDb needs to take a list of mapped values and reduce them into a single value. In this example, each entry has a value of 1, meaning that if we have a list of entries, the sum() will yield the total count.

To be efficient, PliantDb uses an approach similar to CouchDB's, where reduce() is implemented in multiple levels. Each unique key in a View has its own cached reduce() value, calculated by calling reduce() with the list of values from that key's individual entries. If a reduce query spans multiple keys, reduce() will be called with the cached values for those keys, and the rereduce parameter will be true.

println!(
    "Number of quads and triangles: {} (expected 5)",
    db.view::<ShapesByNumberOfSides>()
        .with_keys(vec![3, 4])
        .reduce()
        .await?
);

This statement shows how a query for multiple keys works, and how a single usize is produced by calling reduce() on ShapesByNumberOfSides.
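
To make the two levels concrete, here's a plain-Rust sketch of the arithmetic, assuming three triangles and two quads (that exact split is my assumption; only the expected total of 5 comes from the example):

fn main() {
    // First level: each key's entries are reduced independently and cached.
    let triangle_entries = [1usize, 1, 1]; // three documents emitted key 3
    let quad_entries = [1usize, 1]; // two documents emitted key 4
    let triangles: usize = triangle_entries.iter().sum(); // 3
    let quads: usize = quad_entries.iter().sum(); // 2

    // Rereduce: a query spanning keys 3 and 4 combines the cached values.
    let total: usize = [triangles, quads].iter().sum();
    assert_eq!(total, 5);
}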

What makes a database trustworthy

I'm nearing a point where I could see myself using PliantDb in projects, but I'm not at the point where I would have anyone else use it in theirs. One of the last keys to making me feel safe with this database was having a reliable backup pipeline. While I expect that clustering and replication will largely be how I deploy PliantDb long-term, knowing that I can restore data from a backup if something goes wrong is critical in the early days.

As such, I prioritized getting a local-database backup option working. I finished that up earlier today. There are two commands that are relevant:

pliantdb local-backup somedatabase.pliantdb save

That exports somedatabase.pliantdb to somedatabase.pliantdb.backup. There are options to customize the output location and name. The format is fairly straightforward. Let's use the shapes example:

view-examples.pliantdb.backup/
  shapes/
    0.0.cbor
    2.0.cbor
    ...

And reloading works just as easily:

pliantdb local-backup reloadtarget.pliantdb load view-examples.pliantdb.backup

What's next?

I'm ready to begin implementing the server architecture. Initially, the server will be standalone, with replication support following shortly after. The idea is to support manual failover to read replicas (or scripted failover, if you want to go to the effort). After that is stabilized, clustering will be the last core component needed to support a truly highly-available configuration.

I may, however, delay my work on clustering and instead focus on the apps that will be using these databases. The only way I'll feel comfortable suggesting other people use this library is after I've used it successfully myself for a reasonable amount of time. In a way, it's silly to strive for high availability before the core has been proven in production.

Regardless, I hope there are a few people out there excited by the prospects of an easy-to-use, Rust-native document database.