
Troubleshooting

Ditto won't sync

If you are having trouble synchronizing devices in Ditto, follow this guide.

If you continue to have problems, please email us at support@ditto.live or log in to your Ditto account to chat with the support bot in the lower-right corner of your screen. An engineer will reach out to you shortly.

Synchronization seems slow, or comes to a halt over time

Ensure that you create only a fixed number of live queries, each scoped to a small amount of data. Do not use findAll().
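For example, a live query scoped to just the documents a screen needs might look like the following Swift sketch (the collection name and query string are illustrative, using the legacy find/observe API):

// Observe only the matching documents, not the whole collection.
// Retain the returned live query object for as long as you need updates.
let liveQuery = ditto.store["orders"]
    .find("status == 'open'")
    .observeLocal { docs, event in
        // Update the UI with just the matching documents.
    }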

General diagnostics

  1. Set DittoLogger.minimumLogLevel = DittoLogLevel.VERBOSE before you initialize Ditto(identity) (see the Swift sketch after this list).
  2. Look at the logs. Do you see any helpful errors or warnings?
  3. Verify that your App ID is the same on all devices.
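In Swift, for example, the first step looks like this (a sketch; the exact log-level case and identity parameters depend on your SDK version and setup):

// Raise the log level before constructing the Ditto instance.
DittoLogger.minimumLogLevel = .verbose
let ditto = Ditto(identity: .onlinePlayground(
    appID: "YOUR_APP_ID",
    token: "YOUR_PLAYGROUND_TOKEN"
))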

Bluetooth

  1. Turn Use Location on.
  2. Turn Bluetooth Scanning on.
  3. Are permissions set correctly? See installation.
  4. Go to your OS-level permissions for Bluetooth and clear the app permissions for your application.
  5. Delete the app, install it again, and open it. Does it ask for Bluetooth (and, on Android, location) permissions?
  6. Android only: are you calling ditto.refreshPermissions()?
  7. If you are using a custom TransportConfig, make sure you have enabled all peer-to-peer transports using enableAllPeerToPeer() (see the sketch after this list).
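A minimal Swift sketch of the last point (property and method names can vary slightly between SDK versions):

// Enable Bluetooth LE, LAN, and the other peer-to-peer transports
// on a custom config, then apply it before starting sync.
var config = DittoTransportConfig()
config.enableAllPeerToPeer()
ditto.transportConfig = config
try ditto.startSync()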

Local Area Network (LAN)

  1. Are permissions set correctly? See installation.
  2. Are both devices connected to the same WiFi network?
  3. Check your router settings and see the LAN troubleshooting guide.

Online Playground

Did you copy your playground token and App ID correctly?

  1. Log in to the Ditto Portal.
  2. Go to your app.
  3. Make sure the portal playground token matches the value you are using in your code.

Is Online Playground enabled in the Ditto Portal?

Did your device connect to the internet?

OnlinePlayground applications must connect to the Big Peer before going offline. Read more about online playground.

If your app needs full offline access without ever connecting to the Internet, please request an offline license token.

Do you have a firewall or proxy enabled that is blocking Ditto's connection to the Big Peer?

Verify that you can reach the following endpoints. You should see output similar to the following (the date and some header values will vary):

> nc -v MY_APP_ID.cloud.ditto.live 443
Connection to MY_APP_ID.cloud.ditto.live port 443 [tcp/https] succeeded!
^C

> curl -i https://MY_APP_ID.cloud.ditto.live/_ditto/auth/login
HTTP/1.1 405 Method Not Allowed
Date: Fri, 30 Sep 2022 02:03:36 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 23
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, PUT, POST, DELETE, PATCH, OPTIONS
Access-Control-Allow-Headers: X-DITTO-ENSURE-INSERT,X-HYDRA-ENSURE-INSERT,X-DITTO-CLIENT-ID,X-HYDRA-CLIENT-ID,Accept,Referer,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,X-Forwarded-For
Access-Control-Max-Age: 1728000

HTTP method not allowed

If this test passes, next check whether WebSockets are blocked on your machine. Some corporate networks, firewalls, or proxies block the HTTP upgrade request that establishes a WebSocket connection. Check with your IT administrator to see if your computer is configured to block WebSocket connections.
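One way to check from a terminal is to attempt a WebSocket handshake with curl and look for a 101 Switching Protocols response (a sketch; the path used here is an assumption, and any WebSocket endpoint you can reach will do):

> curl -i -N \
    -H "Connection: Upgrade" \
    -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
    https://MY_APP_ID.cloud.ditto.live/

If a proxy is stripping the Upgrade header, you will see an ordinary HTTP response (or the request will hang) instead of HTTP/1.1 101.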

Online with Authentication

  1. Did you follow the tutorial?
  2. Is the website address behind HTTPS and available on the open Internet?
  3. Verify that your server is reachable by the Big Peer at https://APP_ID.ditto.live and https://cloud.ditto.live.
  4. Are you implementing the authentication expiring soon delegate? (See the sketch after this list.)
  5. Verify that your webhook provider name is correctly copied in the Ditto portal.
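Point 4 refers to a delegate like the following Swift sketch (method and parameter names follow the older Swift SDK and may differ in your version; the provider name and tokens are illustrative):

class AuthDelegate: DittoAuthenticationDelegate {
    func authenticationRequired(authenticator: DittoAuthenticator) {
        // Log in with a token from your identity provider.
        authenticator.loginWithToken("MY_TOKEN", provider: "my-auth-webhook") { err in
            if let err = err { print("Login failed: \(err)") }
        }
    }

    func authenticationExpiringSoon(authenticator: DittoAuthenticator, secondsRemaining: Int64) {
        // Refresh the token before it expires so sync isn't interrupted.
        authenticator.loginWithToken("MY_REFRESHED_TOKEN", provider: "my-auth-webhook") { _ in }
    }
}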

Offline Playground and Shared Key

  1. Log in to the Ditto Portal.
  2. Check the value of your offline license token in the Ditto Portal. Is the offline license token expired?
  3. Is the offline license token copied correctly?
  4. Is the shared key copied correctly? Generate a new shared key.

Debugging Blocked Transactions

info

💡 This section only discusses blocked transactions on native platforms (e.g. iOS, Android, Windows, Linux). Ditto in web browsers operates differently enough that it isn't covered here.

Blocked write transactions will automatically retry until they succeed; a blocked write transaction will never crash. However, blocked write transactions are a common cause of poor database performance. Long-running blocks are generally bad, since nothing else can write to the database during that time. This can manifest as one of several problems:

  • Unresponsive UI: an interaction might try to update a document, but is blocked by an existing write transaction
  • Slow sync: new updates cannot be integrated into the store, since they’re blocked by another write transaction

A blocked write transaction can hint that something is wrong with the application code, or at a deeper level in Ditto. This page contains some tips & tricks to help understand the situation.

The process of unblocking is automatic and you don’t need to write any code to handle this. However, you can drastically reduce the chance of blocking transactions by making sure a device is only syncing the data it really needs.

Diagnosing Blocked Transactions

What is a “blocked” transaction?

At any given time, there can be only one write transaction. Any subsequent attempts to open another write transaction will become blocked until the existing write transaction finishes.

Read vs. Write Transactions

Read transactions operate differently from write transactions.

Read transactions cannot be blocked and can run in parallel with write transactions. Read transactions don’t block each other, don’t block write transactions, and aren’t blocked by write transactions.
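For example, a read can execute immediately even while a long write transaction is open elsewhere (a Swift sketch using the legacy query API; the collection and query are illustrative):

// This read is never blocked, even if a write transaction is in progress.
let adults = ditto.store["people"].find("age >= 18").exec()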

If a write transaction is blocked, Ditto will log a message with increasing severity every 10s.

Time (t) a transaction has been blocked    Log level
t ≤ 30s                                    DEBUG
30s < t ≤ 120s                             WARN
120s < t                                   ERROR

To see these logs, it's important to have an appropriate minimum log level set. Because transactions blocked for over 2 minutes are logged at ERROR level, they should always be visible in the logs.

If the INFO level is used, then INFO, WARN, and ERROR messages will all be included in the logs. This means any write transaction blocked for more than 30s should always be visible in the logs.

Reading the Logs

If a write transaction is blocked, the device logs will look something like the following. In this example we can see a write transaction was blocked for a total of 150s (or 2.5 minutes).

[Screenshot: device logs showing a write transaction blocked for 150s, with messages escalating from DEBUG to WARN to ERROR]

As time progressed, Ditto complained more and more loudly (starting with DEBUG logs before eventually logging at ERROR level). Eventually the existing transaction finished and the blocked transaction was able to proceed.

The write transaction which was blocked was for a Ditto internal component. This is identified by “originator=Internal”.

The existing, long-running write transaction which was causing the block was a user call in the public SDK. This is identified by “blocked_by=User”. So a user-level write transaction is blocking some internal workload. This is not necessarily a problem, as the internal system will catch up eventually.

Originators

The three values you’ll see for originator and blocked_by are:

Originator / Blocked By    Description
"User"                     Any user-level API which modifies data.
"Internal"                 An internal job (presence data, DB maintenance, etc.).
"Replication"              Updating the store with data received via replication.

Causes of Blocked Transactions

This section presents a few example blocked-transaction scenarios, how they appear in logs, and what they might mean.

User blocks User

An application might block its own write transactions by performing multiple writes at the same time in different places. If one is slow (perhaps it does too much work, or reaches out to external APIs), then the other write transactions will block until it finishes.

// Thread/Queue 1 (starts first):
{
    // Somewhere in the app, a long-running write transaction exists.
    ditto.store["people"].findByID(docID).update { mutableDoc in
        // Most update tasks are quick, but a developer might be
        // doing something slow within the update block:
        let apiData = getDataFromASlowExternalAPICall() // <-- !!!!

        mutableDoc?["age"] = apiData.age
        mutableDoc?["ownedCars"].set(DittoCounter())
        mutableDoc?["ownedCars"].counter?.increment(by: apiData.count)
    }
}

// Thread/Queue 2 (starts second):
{
    // Somewhere else in the app, concurrently (e.g. on a background thread
    // or queue), another write transaction tries to update a document.
    //
    // This will block until the "people" update block above completes.
    let docID = try! ditto.store["settings"].upsert([
        "_id": "abc123",
        "preference": 31,
    ])
}

In logs, this scenario will look like the following:

LOG_LEVEL: Waiting for write transaction (elapsed: XXs), originator=User blocked_by=User

Note that “User” is blocking “User”. This could be temporary, but it could also be a deadlock which is much worse. See below.

User deadlocks User

// Start a write transaction:
ditto.store["people"].findByID(docID).update { mutableDoc in
    // Start a write transaction _within_ a write transaction.
    // !! Deadlocks !!
    let docID = try! ditto.store["settings"].upsert([
        "_id": "abc123",
        "preference": 31,
    ])

    // ...
}

You cannot start a write transaction within a write transaction. The result will be a deadlock that never resolves itself. This might manifest in the logs as:

LOG_LEVEL: Waiting for write transaction (elapsed: XXs), originator=User blocked_by=User

Note that User blocking User doesn't necessarily mean a deadlock. It might merely be a long-running write transaction, which might be expected depending on the task. However, if the transaction never unblocks and the ERROR-level log messages continue indefinitely, you have a strong indication that there's a deadlock and should investigate the application code.

Replication blocks User

// Somewhere in the app, a simple enough user write transaction is happening.
// There's no other user write transaction happening concurrently in the
// app, yet the update seems blocked or slow and the UI might seem sluggish
// or unresponsive.
let docID = try! ditto.store["settings"].upsert([
    "_id": "abc123",
    "preference": 31,
])

From the application code, it might not be obvious what the problem is. By looking in the logs, you might get a hint. For instance:

LOG_LEVEL: Waiting for write transaction (elapsed: XXs), originator=User blocked_by=Replication

Here we can see that replication is blocking our user's write transaction. This might happen if we've just received a large amount of sync data from another peer. Most commonly, this happens during initial sync (either with the Big Peer, joining a mesh for the first time, or re-joining a mesh after an extended period away).

This scenario is something to be aware of, but will resolve itself automatically once the sync is complete. The user transaction will eventually unblock and continue when replication has finished updating the store.

Replication blocks Replication

If you see the following in logs, it’s an indication that one replication session with a remote peer is blocking another:

LOG_LEVEL: Waiting for write transaction (elapsed: XXs), originator=Replication blocked_by=Replication

This can happen when the device has received large amounts of data from multiple peers simultaneously and must update the database with the changes it received from each. Ditto can only update the database with sync data from remote peers one at a time, so the other updates must wait their turn.

This scenario is most likely to happen during a large initial sync - either with the Big Peer at the same time as with nearby devices, or just with multiple nearby devices when joining a mesh for the first time (or re-joining after an extended period away).

This scenario is something to be aware of, but will resolve itself automatically once the sync is complete. You can drastically reduce the chance of this type of blocking transaction by making sure each device is only syncing the data it really needs.
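For example, subscribing only to the slice of data a device actually needs, rather than to whole collections (a Swift sketch using the legacy query API; the collection and query are illustrative):

// Sync only open EU orders to this device instead of the entire collection.
// Retain the subscription object for as long as the device should sync it.
let subscription = ditto.store["orders"]
    .find("status == 'open' && region == 'EU'")
    .subscribe()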

User or Replication blocks Internal

A long-running write transaction might block an internal job. This scenario is the hardest to spot; it might not be obvious that it's happening at all. It's also the least likely to cause actual problems.

// Somewhere in the app, a long-running write transaction exists.
ditto.store["people"].findByID(docID).update { mutableDoc in
    // Most update tasks are quick, but you might be
    // doing something slow within the update block:
    let apiData = getDataFromASlowExternalAPICall() // <-- !!!!

    mutableDoc?["age"] = apiData.age
    mutableDoc?["ownedCars"].set(DittoCounter())
    mutableDoc?["ownedCars"].counter?.increment(by: apiData.count)
}

In logs, you might see the following:

LOG_LEVEL: Waiting for write transaction (elapsed: XXs), originator=Internal blocked_by=User

or:

LOG_LEVEL: Waiting for write transaction (elapsed: XXs), originator=Internal blocked_by=Replication

i.e. something is blocking Internal.

This might not have any observable effect, but if it does, it is most likely to manifest as slow or unreliable Ditto Bus connections, an inaccurate presence viewer, or slow or unreliable multihop replication. Ditto's internal mesh presence component must persist an update every 30s for these systems to keep working. If presence can't do this because it's blocked by another write transaction, those systems will begin to fail until presence can successfully write its next update. This scenario should recover automatically once the blocking write transaction finishes.