The MongoDB Connector is currently available as a Helm-deployed add-on for Operator-managed Big Peer.

The MongoDB Connector provides seamless bidirectional synchronization between Ditto apps and MongoDB databases.

To learn more about how it works, see MongoDB Connector.

This page guides you through installing the MongoDB Connector for use on an Operator-managed Big Peer.

Prerequisites

Before setting up the MongoDB Connector, ensure you have:

  1. Prepared a MongoDB Atlas Database
  2. Installed the Ditto Operator (version 0.3.0 or above)
  3. Deployed a Big Peer
  4. Created an App on your Big Peer
  5. Deployed Kafka

The examples in this guide assume you’ve deployed on a kind cluster using our recommended kind config, but they can be adjusted to suit your environment.

Preparing MongoDB Atlas

Follow the steps in the MongoDB Connector prerequisites to prepare your MongoDB Atlas for connection.

When adding IPs to the Atlas IP access list, ensure that you supply the public egress IP address that your Kubernetes pods use for outbound traffic.

This IP depends on how your Kubernetes cluster is configured (e.g. NAT gateways, cloud provider settings, or custom egress rules).

If deploying in kind locally, you can check your public IP with:

curl -4 https://ifconfig.me

Deploying the Ditto Operator

Version 0.3.0 or above is required.

See Ditto Operator to get started with the Operator.
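If you haven’t installed it yet, the Operator can be deployed with Helm. The chart location below is an assumption based on the registry used for the other charts in this guide; check the Ditto Operator documentation for the canonical chart path and latest version:

```shell
# Install the Ditto Operator via Helm (chart path is an assumption --
# verify against the Ditto Operator docs; version 0.3.0 or above is required).
helm install ditto-operator \
  oci://quay.io/ditto-external/ditto-operator \
  --namespace ditto \
  --create-namespace
```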

Deploying a Big Peer

Deploy a Big Peer using a BigPeer custom resource.

For example:

cat <<'EOF' | kubectl apply -f -
---
apiVersion: ditto.live/v1alpha1
kind: BigPeer
metadata:
  name: bp1
  namespace: ditto
spec:
  version: 1.43.0
  network:
    ingress:
      host: bp1.localhost
  auth:
    providers:
      onlinePlayground:
        anonymous:
          permission:
            read:
              everything: true
              queriesByCollection: {}
            write:
              everything: true
              queriesByCollection: {}
          sessionLength: 630000
          sharedToken: abc123
EOF

This creates a basic Big Peer named bp1, which we’ll reference throughout this guide.
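You can confirm the resource was created and wait for its pods to come up. The label selector below is an assumption based on the ditto.live/big-peer label used elsewhere in this guide; adjust it if your pods are labelled differently:

```shell
# List BigPeer resources in the ditto namespace.
kubectl get bigpeer -n ditto

# Watch the Big Peer's pods until they're ready (label is an assumption).
kubectl get pods -n ditto -l ditto.live/big-peer=bp1
```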

Creating an App

Create an App on your Big Peer using either the Operator API or a BigPeerApp resource.

For example:

cat <<'EOF' | kubectl apply -f -
---
apiVersion: ditto.live/v1alpha1
kind: BigPeerApp
metadata:
  name: example-app
  namespace: ditto
  labels:
    ditto.live/big-peer: bp1
spec:
  appId: 2164bef3-37c0-489c-9ac6-c94b034525d7
EOF
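As with the Big Peer, you can confirm the App resource was created:

```shell
# List BigPeerApp resources in the ditto namespace.
kubectl get bigpeerapp -n ditto
```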

Deploying Kafka

For your convenience, we’ve provided a Helm chart to deploy Kafka. You may need to change baseDomain if the Kafka listeners must be reachable over a specific domain you have an ingress for.

For this guide, we’ll assume you’ve deployed using the recommended kind cluster deployment, and we’ll establish a path on localhost by setting baseDomain to kafka.localhost:

helm install kafka-connectors \
  oci://quay.io/ditto-external/kafka-connectors \
  --namespace ditto \
  --create-namespace \
  --set baseDomain=kafka.localhost

The names of certain deployed resources depend on the Helm release name. The rest of this guide assumes this release is named kafka-connectors.

Wait a few minutes for all the pods to be ready:

kubectl get pods -n ditto -l strimzi.io/cluster=kafka-connectors

NAME                                                READY   STATUS    RESTARTS        AGE
kafka-connectors-entity-operator-75d794565c-7486t   2/2     Running   0               3m
kafka-connectors-kafka-connectors-0                 1/1     Running   0               3m

Deploying the MongoDB Connector

The MongoDB Connector is deployed using the ditto-connectors Helm chart.

MongoDB Connection Details

The connection string and MongoDB database name obtained in the Preparing MongoDB Atlas steps need to be stored in a Kubernetes secret for the connector to read from:

kubectl create secret generic mongodb-connection \
  --namespace ditto \
  --from-literal=MONGODB_CONNECTION_STRING='mongodb+srv://username:password@cluster_endpoint/?retryWrites=true&w=majority&appName=Cluster0' \
  --from-literal=MONGODB_DATABASE='your_database_name'

Configuration

Create a configuration file that specifies both the MongoDB connection details and collection mappings:

cat <<EOF > mongo-connector-values.yaml
# The App ID of your Big Peer App
appId: 2164bef3-37c0-489c-9ac6-c94b034525d7

# Enable MongoDB connector
mongoConnector:
  enabled: true
  # Reference the secret we created earlier
  secret_name: mongodb-connection
  # Configure collection mappings. Ensure collections already exist in the MongoDB database, otherwise the connector will error.
  collections: |-
    {
      "cars": {
        "fields": ["id"]
      }
    }

# Disable CDC components as we're not using them
cdcHeartbeat:
  enabled: false
streamSplitter:
  enabled: false

# The name that was given to the Big Peer in the 'BigPeer' resource. In this case, 'bp1'
bigPeerName: bp1

# The MongoDB connector needs to be configured with the Kafka cluster deployed in the first step
cdc:
  kafka:
    clusterName: kafka-connectors
EOF

The collections configuration specifies how MongoDB documents should be mapped to Ditto documents:

  • For each collection, you specify which fields should be used to create the Ditto document ID
  • You can use a single field (e.g. "cars": {"fields": ["id"]})
  • Or multiple fields (e.g. "cars": {"fields": ["id", "color"]})
  • These fields must be immutable and always present in your MongoDB documents

For more guidance, see MongoDB Data Modelling Considerations.
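As an illustration (a sketch of the mapping, not literal connector output), with a mapping of "cars": {"fields": ["id", "color"]}, a MongoDB document containing id, color, and type fields would sync to a Ditto document whose _id is an object built from the listed fields:

```
{
  "_id": { "id": "002", "color": "blue" },
  "type": "suv"
}
```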

Installation

With your configuration values set, deploy the MongoDB Connector using:

helm install mongo-connector \
  oci://quay.io/ditto-external/ditto-connectors \
  --namespace ditto \
  --create-namespace \
  -f mongo-connector-values.yaml

After a few moments, you should see the MongoDB connector pod running:

kubectl get pods -n ditto -l app.kubernetes.io/instance=mongo-connector

NAME                                                        READY   STATUS    RESTARTS   AGE
cdc-2164bef3-37c0-489c-9ac6-c94b034525d7-665b85cdc9-b8gg8   1/1     Running   0          8m
mdb-2164bef3-37c0-489c-9ac6-c94b034525d7-fdbcf4645-grcz4    1/1     Running   0          8m

Verifying Integration

If the deployed pods are running, they’ve successfully established a connection with MongoDB.

To test that the MongoDB Connector documents are syncing correctly:

1. Insert a document via the HTTP API

If you haven’t already, follow the steps in Using the Big Peer HTTP API to obtain an API key.

Example document insertion:

curl -X POST http://bp1.localhost/2164bef3-37c0-489c-9ac6-c94b034525d7/api/v4/store/execute \
  --header "Authorization: bearer YOUR_API_KEY" \
  --header "Content-Type: application/json" \
  --data-raw '{
    "statement": "INSERT INTO cars DOCUMENTS (:doc1)",
    "args": {
      "doc1": {
        "_id": {"id": "002", "locationId": "2345"},
        "color": "blue",
        "type": "suv"
      }
    }
  }'
2. Read from the MongoDB database

You should be able to see this document in your MongoDB database:

mongosh "mongodb://username:password@hostname:port/your_database_name"
> db.cars.find()

These steps can also be performed in reverse: insert a document into the MongoDB database, then read it back with a DQL SELECT through the HTTP API.
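For example (a sketch; adjust the connection details, API key, and hostname to your environment, and note the inserted document includes the id field required by the cars mapping configured earlier — the request shape follows the earlier insert example):

```shell
# Insert a document directly into MongoDB...
mongosh "mongodb://username:password@hostname:port/your_database_name" \
  --eval 'db.cars.insertOne({ id: "003", color: "red", type: "sedan" })'

# ...then read it back from the Big Peer with a DQL SELECT.
curl -X POST http://bp1.localhost/2164bef3-37c0-489c-9ac6-c94b034525d7/api/v4/store/execute \
  --header "Authorization: bearer YOUR_API_KEY" \
  --header "Content-Type: application/json" \
  --data-raw '{ "statement": "SELECT * FROM cars" }'
```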

Troubleshooting

Logs

You can check the logs of the MongoDB connector:

kubectl logs -n ditto -l app.kubernetes.io/instance=mongo-connector

These will contain information about connectivity issues between the connector and MongoDB.

See Troubleshooting Connectivity for more.