Quick Tips for Swift

Avoiding Excessive Memory Consumption

Ditto's documents have value semantics by design. This means that whenever you get a DittoDocument from the Ditto API, it holds its own copy of the corresponding document data in memory, plus a fair amount of bookkeeping. To keep the memory footprint low, it is therefore crucial to release these copies as early as possible so the memory they claim can be freed.

Special care needs to be taken whenever you spread work across multiple queues or async APIs. It is very easy to end up with many work items on a queue, each holding on to large amounts of data, such as big arrays of Ditto documents. This is not always obvious and leads to seemingly mysterious memory growth, eventually resulting in an out-of-memory crash, especially on mobile devices.

Live Queries

Live queries are particularly prone to this problem. Consider the following typical example:

Swift


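A minimal sketch of such a setup; the "cars" collection, the ditto instance, and the process(_:) helper are illustrative stand-ins rather than code from the original sample:

import DittoSwift

// `ditto` is an already-started Ditto instance; `process(_:)` stands in for
// whatever work is done with the documents.
let documentProcessingQueue = DispatchQueue(label: "documentProcessing")

// Keep a strong reference to the returned live query to keep it running.
let liveQuery = ditto.store["cars"].findAll().observeLocal { docs, event in
    // The dispatched work item captures `docs`, so this copy of the documents
    // stays in memory until the queue gets around to running it.
    documentProcessingQueue.async {
        process(docs)
    }
}
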
Depending on the number of documents in the store and especially the rate at which they are updated, that observation callback might be called many times, each time with a fresh copy of the document data (due to value semantics) held in memory. Each of those dispatches will hold on to these document copies until the work item has a chance to run. If the documentProcessingQueue is unable to process the documents and release them fast enough, more and more documents accumulate in memory waiting to be processed, resulting in excessive memory use.

Back Pressure

To deal with these situations, all of our APIs prone to this problem have more advanced variants that allow you to control the rate at which those callbacks are called. This mechanism is commonly referred to as Back Pressure. Here is a much safer and more efficient way to implement the example above:

Swift


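A sketch of the same example using the next-signal variant; as before, the collection and helper names are illustrative. Ditto does not deliver the next event until the DittoSignalNext block is called:

let documentProcessingQueue = DispatchQueue(label: "documentProcessing")

let liveQuery = ditto.store["cars"].findAll().observeLocalWithNextSignal { docs, event, signalNext in
    documentProcessingQueue.async {
        process(docs)
        // Only now is Ditto asked for the next batch, so at most one copy of
        // the documents is waiting on the queue at any time.
        signalNext()
    }
}
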
Since this particular pattern is so common, Ditto offers a convenient variant; however, it requires all work to be performed within the callback without dispatching onto different queues or using any async API that would hold on to the documents:

Swift


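A sketch of this convenience variant, again with illustrative names; all processing stays inside the callback, and the next event is delivered only after the callback returns:

let documentProcessingQueue = DispatchQueue(label: "documentProcessing")

let liveQuery = ditto.store["cars"].findAll()
    .observeLocal(deliverOn: documentProcessingQueue) { docs, event in
        // No dispatching onto other queues and no async APIs here: the
        // documents can be released as soon as this closure returns.
        process(docs)
    }
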
General Rule

All of this boils down to the following general rule:

Use observeLocal(deliverOn:) only if the received documents, including the event, are processed and can be released within that callback without dispatching onto other queues or using any async APIs. Otherwise, use observeLocalWithNextSignal(deliverOn:) and call the DittoSignalNext block after the received documents are fully processed and can be released.

This of course doesn't mean that you can never keep a reference to the documents and use or operate on them later on. In fact, a typical use case is to always keep the latest set of documents returned by a (live) query in order to display them in the UI or otherwise work with them. The important thing is to control the rate at which they are delivered and to let Ditto know when you are ready to receive the next batch. This rule of thumb can help with that.
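For instance, a view model could keep only the most recent batch and signal readiness once its state has been replaced; the type and property names below are illustrative, not part of the Ditto API:

import DittoSwift

final class CarsViewModel {
    // Only the latest batch is retained; the previous copy is released on
    // each update.
    private(set) var cars: [DittoDocument] = []
    private var liveQuery: DittoLiveQuery?

    func startObserving(with ditto: Ditto) {
        liveQuery = ditto.store["cars"].findAll()
            .observeLocalWithNextSignal(deliverOn: .main) { [weak self] docs, event, signalNext in
                self?.cars = docs   // e.g. trigger a UI refresh here
                signalNext()        // ready to receive the next batch
            }
    }
}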

Implementation Details

The four variants shown in this article are as follows:

Swift


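A sketch of their shapes, reconstructed from the names used in this article and grouped in a hypothetical protocol; consult the DittoSwift API reference for the exact declarations:

import DittoSwift

// Hypothetical grouping, for illustration only.
protocol LiveQueryObserving {
    func observeLocal(
        eventHandler: @escaping ([DittoDocument], DittoLiveQueryEvent) -> Void
    ) -> DittoLiveQuery

    func observeLocal(
        deliverOn queue: DispatchQueue,
        eventHandler: @escaping ([DittoDocument], DittoLiveQueryEvent) -> Void
    ) -> DittoLiveQuery

    func observeLocalWithNextSignal(
        eventHandler: @escaping ([DittoDocument], DittoLiveQueryEvent, DittoSignalNext) -> Void
    ) -> DittoLiveQuery

    func observeLocalWithNextSignal(
        deliverOn queue: DispatchQueue,
        eventHandler: @escaping ([DittoDocument], DittoLiveQueryEvent, DittoSignalNext) -> Void
    ) -> DittoLiveQuery
}
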
The first three are convenience methods and are implemented in terms of the last:

Swift

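A plausible sketch of that layering, written as free functions to keep it self-contained; this is illustrative and not the actual DittoSwift source:

import DittoSwift

// observeLocal(deliverOn:eventHandler:) in terms of the full variant:
// signalNext() fires as soon as the handler returns, which is why all work
// must stay inside the callback for this variant to be safe.
func observeLocalSketch(
    _ cursor: DittoPendingCursorOperation,
    deliverOn queue: DispatchQueue,
    eventHandler: @escaping ([DittoDocument], DittoLiveQueryEvent) -> Void
) -> DittoLiveQuery {
    return cursor.observeLocalWithNextSignal(deliverOn: queue) { docs, event, signalNext in
        eventHandler(docs, event)
        signalNext()
    }
}

// observeLocal(eventHandler:) in terms of the above, assuming the main queue
// as the default delivery queue:
func observeLocalSketch(
    _ cursor: DittoPendingCursorOperation,
    eventHandler: @escaping ([DittoDocument], DittoLiveQueryEvent) -> Void
) -> DittoLiveQuery {
    return observeLocalSketch(cursor, deliverOn: .main, eventHandler: eventHandler)
}

// observeLocalWithNextSignal(eventHandler:) in terms of the full variant,
// again assuming the main queue as the default:
func observeLocalWithNextSignalSketch(
    _ cursor: DittoPendingCursorOperation,
    eventHandler: @escaping ([DittoDocument], DittoLiveQueryEvent, DittoSignalNext) -> Void
) -> DittoLiveQuery {
    return cursor.observeLocalWithNextSignal(deliverOn: .main) { docs, event, signalNext in
        eventHandler(docs, event, signalNext)
    }
}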