MongoDB Connector
The MongoDB Connector is currently available as a Helm-deployed add-on for Operator-managed Big Peer deployments.
The MongoDB Connector provides seamless bidirectional synchronization between Ditto apps and MongoDB databases.
To learn more about how it works, see MongoDB Connector.
This page guides you through installing the MongoDB Connector for use on an Operator-managed Big Peer.
Prerequisites
Before setting up the MongoDB Connector, ensure you have:
- Prepared a MongoDB Atlas Database
- Installed the Ditto Operator (version 0.3.0 or above)
- Deployed a Big Peer
- Created an App on your Big Peer
- Deployed Kafka
The examples in this guide assume you’ve deployed on a kind cluster using our recommended kind config, but they can be adjusted to suit your environment.
Preparing MongoDB Atlas
Follow the steps in the MongoDB Connector prerequisites to prepare your MongoDB Atlas for connection.
When whitelisting IPs in Atlas, ensure that you supply the public egress IP address that your Kubernetes pods use for outbound traffic.
This IP depends on how your Kubernetes cluster is configured (e.g. NAT gateways, cloud provider settings, or custom egress rules).
If deploying locally in kind, you can check your public IP with curl -4 https://ifconfig.me.
Deploying the Ditto Operator
Version 0.3.0 or above is required.
See Ditto Operator to get started with the Operator.
Deploying a Big Peer
Deploy a Big Peer using a BigPeer custom resource.
For example:
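A minimal manifest might look like the following sketch; the apiVersion and spec fields shown here are assumptions, so consult the Ditto Operator's CRD reference for the exact schema:

```yaml
# Hypothetical minimal BigPeer resource -- the apiVersion and spec
# fields are illustrative and may differ in your Operator version.
apiVersion: ditto.live/v1alpha1
kind: BigPeer
metadata:
  name: bp1
  namespace: default
spec: {}
```

Apply it with kubectl apply -f bigpeer.yaml.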
This creates a basic Big Peer, called bp1, that we’ll reference throughout this guide.
Creating an App
Create an App on your Big Peer using either the Operator API or a BigPeerApp resource.
For example:
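A BigPeerApp manifest might look like this sketch; the apiVersion and field names are assumptions, so check the Operator's CRD reference for the real schema:

```yaml
# Hypothetical BigPeerApp resource -- field names are illustrative;
# consult the Ditto Operator's CRD reference for the actual schema.
apiVersion: ditto.live/v1alpha1
kind: BigPeerApp
metadata:
  name: my-app
  namespace: default
spec:
  bigPeer: bp1
```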
Deploying Kafka
For your convenience, we’ve provided a Helm chart to deploy Kafka. You may need to change the baseDomain if you want to make the Kafka topics available over a specific domain you have an ingress for.
For this guide, we’ll assume you’ve deployed using the recommended kind cluster deployment, and we’ll establish a path on localhost by setting baseDomain to kafka.localhost:
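As a sketch, the install might look like the following; the chart reference is a placeholder for wherever Ditto publishes its Kafka chart:

```shell
# The chart reference is a placeholder -- substitute the actual
# Ditto-provided Kafka Helm chart location.
helm install kafka-connectors <ditto-kafka-chart> \
  --set baseDomain=kafka.localhost
```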
The naming of certain deployed resources depends on the name of the Helm release. The rest of this guide assumes this release is named kafka-connectors.
Wait a few minutes for all the pods to be ready:
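One way to wait, assuming everything was installed into the default namespace:

```shell
# Block until all pods in the namespace report Ready, then list them.
kubectl wait --for=condition=Ready pods --all --timeout=5m
kubectl get pods
```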
Deploying the MongoDB Connector
The MongoDB Connector is deployed using the ditto-connectors Helm chart.
MongoDB Connection Details
The connection string and MongoDB database name obtained in the Preparing MongoDB Atlas steps need to be stored in a Kubernetes secret for the connector to read from:
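As a sketch, the secret could be created like this; the secret and key names are illustrative, so use whatever names your connector configuration references:

```shell
# Secret and key names are illustrative placeholders -- align them
# with what your ditto-connectors values file expects.
kubectl create secret generic mongodb-connection \
  --from-literal=connectionString='<your-atlas-connection-string>' \
  --from-literal=database='<your-database-name>'
```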
Configuration
Create a configuration file that specifies both the MongoDB connection details and collection mappings:
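Such a values file might be shaped like the following sketch; the key names here are assumptions, and the chart’s own values.yaml is the authoritative schema:

```yaml
# Illustrative structure only -- consult the ditto-connectors chart's
# values.yaml for the actual keys and nesting.
mongodb:
  connectionSecret: mongodb-connection
  collections:
    cars:
      fields: ["id"]
```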
The collections configuration specifies how MongoDB documents should be mapped to Ditto documents:
- For each collection, you specify which fields should be used to create the Ditto document ID
- You can use a single field (e.g. "cars": {"fields": ["id"]}) or multiple fields (e.g. "cars": {"fields": ["id", "color"]})
- These fields must be immutable and always present in your MongoDB documents
For more guidance, see MongoDB Data Modelling Considerations.
Installation
With your configuration values set, deploy the MongoDB Connector using:
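As a sketch, assuming your configuration lives in values.yaml; the chart reference is a placeholder for the actual ditto-connectors chart location:

```shell
# The chart reference is a placeholder -- substitute the actual
# ditto-connectors chart and your own values file.
helm install mongodb-connector <ditto-connectors-chart> -f values.yaml
```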
After a few moments, you should see the MongoDB connector pod running:
Verifying Integration
If the deployed pods are running, they’ve successfully established a connection with MongoDB.
To test that the MongoDB Connector documents are syncing correctly:
Insert a document via the HTTP API
If you haven’t already, follow the steps in Using the Big Peer HTTP API to create an API.
Example document insertion:
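A sketch of inserting a document with DQL over HTTP; the host, API key, and endpoint path are placeholders for your Big Peer’s HTTP API details:

```shell
# Host, bearer token, and endpoint path are placeholders for your
# deployment; the body is a DQL INSERT with a positional argument.
curl -X POST 'http://<your-big-peer-host>/api/v4/store/execute' \
  -H 'Authorization: Bearer <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
        "statement": "INSERT INTO cars DOCUMENTS (:newCar)",
        "args": { "newCar": { "id": "car-1", "color": "blue" } }
      }'
```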
Read from MongoDB Database
You should be able to see this document in your MongoDB database:
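For example, using mongosh against your Atlas cluster; the connection string and database name are placeholders:

```shell
# Connection string and database name are placeholders for your
# Atlas cluster; this lists documents in the "cars" collection.
mongosh "<your-atlas-connection-string>/<your-database-name>" \
  --eval 'db.cars.find()'
```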
These steps can of course be performed in reverse: insert a document into the MongoDB database, then read it back with a DQL SELECT through the HTTP API.
Troubleshooting
Logs
You can check the logs of the MongoDB connector:
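For example, assuming the connector was deployed under the name mongodb-connector; adjust the deployment name to match your release:

```shell
# The deployment name depends on your Helm release; adjust accordingly.
kubectl logs deployment/mongodb-connector --tail=100
```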
These contain information about connectivity issues between the connector and MongoDB.
See Troubleshooting Connectivity for more.