Backend Architecture

Cloud-Optional Design

The realtime end-user transactional data generated by your app can be disseminated across devices through three different methods:

  • Centralized — A Small Peer device establishes a direct internet-enabled connection to the Big Peer cloud deployment.
  • Decentralized — A Small Peer device establishes a mesh network connection with nearby Small Peer devices using any and all communication transport types available to them by default.
  • Hybrid — If one or more Small Peer devices in the mesh network gain access to the internet, those devices upload not only their own local Ditto store to the Big Peer cloud but also the data of every nearby offline Small Peer device.

As a cloud-optional data management platform, you get to choose the distributed system architecture that best meets your unique goals and use cases.
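
To make the three methods concrete, the following sketch models them as configuration shapes. The types, field names, transport names, and URL are illustrative assumptions, not the Ditto SDK API; they only show how the centralized, decentralized, and hybrid topologies differ.

```typescript
// Hypothetical configuration shapes (not the Ditto SDK API) illustrating the
// three dissemination methods described above.
type SyncTopology = "centralized" | "decentralized" | "hybrid";

interface SyncConfig {
  topology: SyncTopology;
  cloudEndpoint?: string; // Big Peer URL; used by the centralized and hybrid modes
  peerToPeerTransports: Array<"bluetoothLE" | "lan" | "awdl" | "wifiDirect">;
}

// Centralized: a Small Peer talks directly to the Big Peer over the internet.
const centralized: SyncConfig = {
  topology: "centralized",
  cloudEndpoint: "wss://big-peer.example.com", // placeholder URL
  peerToPeerTransports: [],
};

// Decentralized: Small Peers mesh over every local transport available to them.
const decentralized: SyncConfig = {
  topology: "decentralized",
  peerToPeerTransports: ["bluetoothLE", "lan", "awdl", "wifiDirect"],
};

// Hybrid: peers mesh locally, and any peer that reaches the internet also
// relays the mesh's data to the Big Peer on behalf of offline peers.
const hybrid: SyncConfig = {
  topology: "hybrid",
  cloudEndpoint: "wss://big-peer.example.com", // placeholder URL
  peerToPeerTransports: ["bluetoothLE", "lan", "awdl", "wifiDirect"],
};
```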

In this topic, you'll learn about the three CAP characteristics of distributed systems and how Ditto offers a flexible data experience by blending the best qualities of the transactional edge (Small Peers) and the analytical cloud (Big Peer) in a single distributed system architecture.

Design Tradeoffs

In distributed systems, in which a network stores data on more than one node, it is impossible to guarantee all three CAP theorem properties: consistency, availability, and partition tolerance. In plain language, you can’t have it all when designing a decentralized distributed system architecture.

In addition, when contending with asynchronous communications, in which unstable network conditions may cause data delays, reordering, or loss, guaranteeing partition tolerance is a must. Therefore, you must choose to sacrifice high availability or strong consistency in your architecture model.

The following graphic illustrates the logic of the CAP theorem, also referred to as Brewer's theorem:

[Figure: CAP theorem (Brewer's theorem) showing consistency, availability, and partition tolerance]
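
To make the tradeoff concrete, here is an illustrative sketch, not Ditto's implementation, of how a replica might answer a read while a network partition cuts it off from its peers: a design that favors consistency refuses to answer, while one that favors availability serves possibly stale local data.

```typescript
// Illustrative only: the choice a replica faces during a network partition.
type ReadResult<T> =
  | { value: T; possiblyStale: boolean }
  | { error: "unavailable" };

function readDuringPartition<T>(
  localValue: T,
  partitioned: boolean,
  preference: "consistency" | "availability",
): ReadResult<T> {
  if (!partitioned) {
    // No partition: the replica can serve fresh data and stay available.
    return { value: localValue, possiblyStale: false };
  }
  if (preference === "consistency") {
    // CP choice: refuse to answer rather than risk returning out-of-date data.
    return { error: "unavailable" };
  }
  // AP choice: stay available and answer from the local replica, accepting
  // that the value may be stale until the partition heals and replicas converge.
  return { value: localValue, possiblyStale: true };
}
```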


Consistency Considerations

Within Ditto's underlying infrastructure, two distinct consistency models work in tandem to achieve reliable data management. A consistency model establishes guarantees regarding the order and visibility of updates across the system.

The following list describes the two consistency models Ditto provides:

  • Eventual consistency: enforced by Small Peers. All updates propagate across distributed replicas (local Ditto stores) and eventually converge on a single meaningful value.
  • Causal consistency: enforced by the Big Peer. Causally related updates become visible to every peer in the same order in which they occurred.

Comparing Consistency Models

When compared to the eventually consistent model enforced by Small Peers, the causal consistency model enforced by the Big Peer is considered much more straightforward to implement.

This is because the causal consistency model relaxes the requirement for immediate consistency guarantees, which are the guarantees most susceptible to concurrency conflicts.

Eventually Consistent and Conflict-Free

Small Peers enforce strong eventual consistency by way of conflict-free replicated data type (CRDT) technology.

As the foundation of how Ditto exposes and models data, CRDTs ensure that any data inconsistencies that occur as a result of concurrency conflicts eventually merge into a single value. A concurrency conflict is a simultaneous update made to the same data items stored in different database replicas.

Ditto’s CRDTs are delta-state CRDTs, which means that only the data that has changed, known as the delta, replicates across peers in the mesh network. This paradigm keeps peer-to-peer data transmission highly frequent and efficient.
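
As a minimal illustration of the idea, and not Ditto's actual data types, the sketch below implements a last-writer-wins register, one of the simplest state-based CRDTs. Because its merge function is commutative, associative, and idempotent, replicas that apply the same concurrent updates in any order converge on the same single value.

```typescript
// Illustrative last-writer-wins register (not Ditto's CRDT implementation).
interface LwwRegister<T> {
  value: T;
  timestamp: number; // logical or wall-clock time of the write
  peerId: string;    // tie-breaker for writes with equal timestamps
}

function write<T>(value: T, timestamp: number, peerId: string): LwwRegister<T> {
  return { value, timestamp, peerId };
}

function merge<T>(a: LwwRegister<T>, b: LwwRegister<T>): LwwRegister<T> {
  if (a.timestamp !== b.timestamp) {
    return a.timestamp > b.timestamp ? a : b;
  }
  return a.peerId > b.peerId ? a : b; // deterministic tie-break
}

// Two peers update the same field while disconnected from each other...
const onPeerA = write("burger", 2, "peer-a");
const onPeerB = write("salad", 3, "peer-b");

// ...and converge on the same value regardless of the order the merges run in.
console.log(merge(onPeerA, onPeerB).value); // "salad"
console.log(merge(onPeerB, onPeerA).value); // "salad"
```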

Strong Causal Consistency

Under eventual consistency, it can seem like anything is allowed to happen: if two actions are totally unrelated, they can be ordered any way the system chooses.

The causally consistent model is the opposite: as the name implies, if one action occurs before another and therefore influences it, every execution must observe the two actions in that same order.

As an example, imagine that you have two collections, Menus and Orders:

  1. First, you add a new item to the Menu, and then you create an Order that points to your new Menu item.
  2. Under eventual consistency, when partitioned and replicated across the distributed database nodes, these two actions can merge in a different order.
  3. As a result, some peers observe an order that references a menu item they cannot yet see.

Causal consistency ensures that the menu item is added before the order is created, regardless of the unpredictability and uncertainty of networks, connections, and ordering of messages.
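
The sketch below shows one way a replica could honor this guarantee; it is illustrative only, not Ditto's internals. The replica buffers an incoming order until the menu item it depends on has been observed, so it never exposes an order that points at a missing menu item.

```typescript
// Illustrative causal-ordering buffer (not Ditto's replication protocol).
interface MenuItem { id: string; name: string }
interface Order { id: string; menuItemId: string }

class Replica {
  private menu = new Map<string, MenuItem>();
  private orders = new Map<string, Order>();
  private pendingOrders: Order[] = [];

  applyMenuItem(item: MenuItem): void {
    this.menu.set(item.id, item);
    // A newly observed menu item may unblock orders that arrived "early".
    this.pendingOrders = this.pendingOrders.filter((order) => {
      if (this.menu.has(order.menuItemId)) {
        this.orders.set(order.id, order);
        return false; // dependency satisfied; stop buffering this order
      }
      return true;
    });
  }

  applyOrder(order: Order): void {
    if (this.menu.has(order.menuItemId)) {
      this.orders.set(order.id, order);
    } else {
      this.pendingOrders.push(order); // wait for the causally prior write
    }
  }
}

// Replication can deliver the messages in any order; the replica still
// exposes them in causal order.
const replica = new Replica();
replica.applyOrder({ id: "order-1", menuItemId: "menu-42" });  // buffered
replica.applyMenuItem({ id: "menu-42", name: "Flat White" });  // unblocks order-1
```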

Transactional causal consistency means that, as long as these operations are written to the same Ditto store, you can apply this ordering guarantee across any number of related changes, spanning multiple documents and multiple collections.
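
As a hypothetical sketch of the idea, not the Ditto SDK API, the helper below groups related writes so that a reader observes every write in a committed batch or none of them, even when the writes span multiple collections.

```typescript
// Hypothetical write grouping illustrating transactional causal consistency.
interface Write { collection: string; document: Record<string, unknown> }

// Each committed batch is atomic from a reader's point of view.
const committedBatches: Write[][] = [];

function commit(writes: Write[]): void {
  committedBatches.push(writes);
}

function visibleDocuments(collection: string): Record<string, unknown>[] {
  return committedBatches
    .flat()
    .filter((write) => write.collection === collection)
    .map((write) => write.document);
}

// The menu item and the order that references it are committed as one batch,
// so no reader can observe the order without also observing the menu item.
commit([
  { collection: "menus", document: { id: "menu-42", name: "Flat White" } },
  { collection: "orders", document: { id: "order-1", menuItemId: "menu-42" } },
]);
```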

This model is much simpler to understand than eventual consistency, leading to fewer technical surprises during development, implementation, and operation.