If you’ve used the Bipsync Notes iOS app – or practically any other mobile app made since the iPhone’s debut – you’ll be familiar with the design paradigm in which content is presented in a scrollable list.
We employ this UI pattern throughout our application, most notably when presenting the user with a list of notes within a given context. A context can be anything from a stock market ticker to a contact or an investment idea, but each serves to determine which notes should appear in the list.
In our code we define a class for each context, and these classes conform to a protocol which allows us to interrogate them for metadata that can be used to tailor the database queries that filter the note list. It’s an extensible, flexible design that has served us well for a number of years.
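To make that concrete, here is a minimal sketch of the kind of context protocol described above – the protocol name, methods and example conformer are invented for illustration and aren't our actual code:

```objc
// Hypothetical names for illustration: each context type exposes the metadata
// the note list needs to build its Core Data query.
@protocol BPSNoteListContext <NSObject>
- (NSString *)contextTitle;        // shown in the navigation bar
- (NSPredicate *)notesPredicate;   // filters which notes appear in the list
@end

// Example conformer: a ticker context shows notes tagged with its symbol.
@interface BPSTickerContext : NSObject <BPSNoteListContext>
@property (nonatomic, copy) NSString *symbol;
@end

@implementation BPSTickerContext

- (NSString *)contextTitle {
    return self.symbol;
}

- (NSPredicate *)notesPredicate {
    return [NSPredicate predicateWithFormat:@"ANY tickers.symbol == %@", self.symbol];
}

@end
```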
The Bad
Recently we found the app was sometimes slow to start, and in certain circumstances – usually while the app was also busy synchronising a large set of notes from the server – key features such as the note editor appeared unresponsive to the user. We needed to quickly determine what was causing this performance issue and put a stop to it.
Luckily, Apple provide some excellent debugging tools for scenarios such as these. Using the Allocations instrument we could see that during a long, intensive sync with the server the app’s memory consumption kept climbing, such that given enough time it would eventually exhaust the device’s RAM. As if that weren’t bad enough, the Time Profiler instrument showed the app placing heavy demands on the device’s CPU for the duration of the sync process.
Addressing these two issues would be key to further improving the app’s stability and responsiveness for our users.
Using Xcode’s built-in Visual Memory Debugger we were able to determine that memory was constantly increasing because:
- The NSOperation queue that stores a sequence of operations to download data from our server was retaining substantial amounts of data
- The same was true of the NSManagedObjectContext used in the background sync process
- The NSFetchedResultsController which provides data to the note list was retaining thousands of Core Data managed objects
Additional inspection of stack traces revealed that the main culprit of our high CPU usage was a method call that queried the store to determine how many notes were in the current context; this happened every time the NSFetchedResultsController notified its delegate that an object had been inserted into or removed from the store.
The Good
It was immediately clear from our debugging that many of these issues would be simple to address.
We began with the high CPU usage issue. We had to find a way to reduce the number of times we made that expensive count query. The query’s result is used to inform the user of the number of notes within the current context; we display the total in the navigation bar.
By performing that query each time a note was added or removed from the store we could be sure that the count was always accurate, but we hadn’t considered how this would play out during a sync, when hundreds of objects come and go each second.
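The count itself is an ordinary Core Data count query along these lines – a sketch with illustrative entity and property names, reusing the hypothetical notesPredicate from the earlier sketch:

```objc
// Count the notes matching the current context's predicate and show the total
// in the navigation bar. Cheap in isolation, but expensive when run for every
// change notification during a busy sync.
NSFetchRequest *countRequest = [NSFetchRequest fetchRequestWithEntityName:@"Note"];
countRequest.predicate = [self.currentContext notesPredicate];

NSError *error = nil;
NSUInteger noteCount = [self.uiContext countForFetchRequest:countRequest error:&error];
if (noteCount != NSNotFound) {
    self.navigationItem.title = [NSString stringWithFormat:@"Notes (%lu)", (unsigned long)noteCount];
}
```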
The simple solution was to throttle the query so that it could run at most once every n seconds. After some experimentation we settled on two seconds, which proved to be a sweet spot: the count stayed reasonably accurate while CPU usage remained low.
The throttle code is quite simple:
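We won’t reproduce the exact snippet here, but a minimal sketch of the idea – with an invented bps_throttleWithKey:interval:block: method – looks like this:

```objc
#import <objc/runtime.h>

@interface NSObject (BPSThrottle)
- (void)bps_throttleWithKey:(NSString *)key
                   interval:(NSTimeInterval)interval
                      block:(dispatch_block_t)block;
@end

@implementation NSObject (BPSThrottle)

// Lazily create a per-object dictionary of pending timers, keyed by throttle key.
- (NSMutableDictionary<NSString *, NSTimer *> *)bps_throttleTimers {
    NSMutableDictionary *timers = objc_getAssociatedObject(self, @selector(bps_throttleTimers));
    if (timers == nil) {
        timers = [NSMutableDictionary dictionary];
        objc_setAssociatedObject(self, @selector(bps_throttleTimers), timers,
                                 OBJC_ASSOCIATION_RETAIN_NONATOMIC);
    }
    return timers;
}

- (void)bps_throttleWithKey:(NSString *)key
                   interval:(NSTimeInterval)interval
                      block:(dispatch_block_t)block
{
    NSMutableDictionary<NSString *, NSTimer *> *timers = [self bps_throttleTimers];

    // A pending timer means the block is already scheduled for this key, so the
    // call is dropped and covered by that upcoming invocation.
    if (timers[key] != nil) {
        return;
    }

    timers[key] = [NSTimer scheduledTimerWithTimeInterval:interval
                                                  repeats:NO
                                                    block:^(NSTimer *timer) {
        [timers removeObjectForKey:key];
        block();
    }];
}

@end
```

The note list controller can then refresh the count with something like [self bps_throttleWithKey:@"noteCount" interval:2.0 block:^{ [self updateNoteCount]; }], where updateNoteCount is whatever method runs the query shown earlier.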
We use a category so the throttle method is available to any class in our codebase. This method employs a timer and a dictionary to make sure that it doesn’t call the same method more than once in the specified interval.
With that out of the way, we moved on to the memory issues.
The Ugly
This is where things got tricky.
Initially we made quick progress. Looking at the issue where the NSOperations we use in our sync engine were retaining data, we realised that because we daisy-chain the operations using INSOperationsKit’s dependency functionality, the NSOperationQueue that manages them had to wait until the last operation completed before it would release the entire object graph.
We updated our code to make sure that none of the operations retained data once they’d finished executing. This freed up a substantial amount of memory over the lifetime of a sync process.
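As a rough illustration of the pattern – with invented class and property names, not our actual sync operations – each operation hands its payload to the next one and then drops its own reference:

```objc
// The download operation holds the raw payload until a dependent operation
// has consumed it.
@interface BPSDownloadOperation : NSOperation
@property (atomic, strong) NSData *responseData;
@end

@implementation BPSDownloadOperation
- (void)main {
    if (self.isCancelled) { return; }
    // ... fetch data from the server into self.responseData ...
}
@end

// The parse operation depends on the download operation. Because the queue
// keeps the whole dependency chain alive until the final operation finishes,
// it clears the upstream payload as soon as it has imported it.
@interface BPSParseOperation : NSOperation
@property (nonatomic, strong) BPSDownloadOperation *downloadOperation;
@end

@implementation BPSParseOperation
- (void)main {
    if (self.isCancelled) { return; }

    NSData *payload = self.downloadOperation.responseData;
    // ... parse `payload` and import the records into Core Data ...

    // Release the data (and our reference to the upstream operation) now,
    // rather than waiting for the entire chain to complete.
    self.downloadOperation.responseData = nil;
    self.downloadOperation = nil;
}
@end
```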
Resetting the background context
Next we looked at the issue where the NSManagedObjectContext used by the background thread that performs a sync wasn’t releasing the objects it had instantiated. We were following the approach that is nicely summarised here, periodically calling -[NSManagedObjectContext reset] after a batch of records had been imported and expecting that to allow them to be cleaned up by Core Data. However, this did not appear to be happening – the number of objects registered in the context did not change after a reset.
We came across a post on Stack Overflow which posits that instances of NSManagedObject sometimes have cyclic relationships that prevent them from being turned into faults to free memory. It suggests manually refreshing each object in the collection to trim the object graph, at which point a reset is able to fully clear the context of registered objects. That seemed to work – our sync context no longer hung on to managed objects well after they’d been saved.
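A minimal sketch of that clean-up, assuming the background context is held in a variable called syncContext:

```objc
[syncContext performBlockAndWait:^{
    // Refresh each registered object first: turning it back into a fault breaks
    // the strong references created by its loaded relationships, which otherwise
    // prevent -reset from releasing the objects.
    for (NSManagedObject *object in [syncContext.registeredObjects allObjects]) {
        [syncContext refreshObject:object mergeChanges:NO];
    }
    [syncContext reset];
}];
```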
Managing the NSFetchedResultsController
And so to the most pernicious problem – why was the NSFetchedResultsController that backs our note list retaining thousands of notes that we’d never asked it to load?
Before we delve into the answer, we need to explain how the fetched results controller is configured.
Because a context can often contain tens of thousands of notes, we can’t feasibly load them all into memory. We set the fetch request’s fetchBatchSize to a reasonable value so that only a subset of records is faulted into memory at a time, and we also paginate the loading of the note list by adjusting the fetchLimit on the NSFetchRequest that queries for notes to load.
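A sketch of that configuration – the entity, sort key and pagination properties are illustrative, and notesPredicate refers back to the hypothetical context protocol above:

```objc
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Note"];
request.predicate = [self.currentContext notesPredicate];
request.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"updatedAt" ascending:NO]];
request.fetchBatchSize = 20;                          // fault records in small batches
request.fetchLimit = self.pageSize * self.pageCount;  // grows as the user paginates

self.fetchedResultsController =
    [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                        managedObjectContext:self.uiContext
                                          sectionNameKeyPath:nil
                                                   cacheName:nil];
self.fetchedResultsController.delegate = self;

NSError *error = nil;
if (![self.fetchedResultsController performFetch:&error]) {
    NSLog(@"Failed to fetch notes: %@", error);
}
```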
Despite all this, inspecting the memory debugger while a sync was ongoing revealed that thousands of notes were still being registered in the UI’s managed object context, even though only a fraction of that number was shown in the UI.
It seems that despite our having set up the NSFetchedResultsController so that it should only concentrate on a handful of notes, objects that were merged into the UI’s managed object context from the background sync’s context were sticking around even though they weren’t relevant. This occurred both when using the context’s automaticallyMergesChangesFromParent property and when we manually processed NSManagedObjectContextObjectsDidChange notifications.
Indeed, it appears that an NSFetchedResultsController only respects a fetch limit when making its initial request; thereafter, any inserted, updated or deleted objects satisfying the NSFetchRequest’s predicate will be loaded into the context. As this applies to many, if not all, of the notes imported during a sync, it goes some way towards explaining our problem.
We tried many things to avoid these objects stacking up, to no avail. Eventually we came upon a solution, albeit a crude one. As the NSFetchedResultsController calls controller:didChangeObject:atIndexPath:forChangeType:newIndexPath: on its delegate as objects change, we check the UI context to see whether its registered objects have exceeded a given threshold. If they have, we create a new NSFetchedResultsController with an identical fetch request to the current one and reload the table. This causes any notes that are not eligible for display in the list to be released, reclaiming their memory immediately.
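In sketch form, with the threshold value and property names invented for illustration:

```objc
static const NSUInteger BPSRegisteredObjectThreshold = 500; // assumed value

- (void)controller:(NSFetchedResultsController *)controller
   didChangeObject:(id)anObject
       atIndexPath:(NSIndexPath *)indexPath
     forChangeType:(NSFetchedResultsChangeType)type
      newIndexPath:(NSIndexPath *)newIndexPath
{
    // ... usual row insertion/deletion/update handling ...

    // Crude but effective: once too many objects are registered in the UI
    // context, rebuild the controller with an identical fetch request and
    // reload the table so the irrelevant notes can be deallocated.
    if (self.uiContext.registeredObjects.count > BPSRegisteredObjectThreshold) {
        NSFetchRequest *request = [controller.fetchRequest copy];
        self.fetchedResultsController =
            [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                                managedObjectContext:self.uiContext
                                                  sectionNameKeyPath:nil
                                                           cacheName:nil];
        self.fetchedResultsController.delegate = self;

        NSError *error = nil;
        [self.fetchedResultsController performFetch:&error];
        [self.tableView reloadData];
    }
}
```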
We continue to investigate ways that we can avoid the build-up of objects in our contexts.

With our fixes in place the memory graph peaks at around 200 MB. You can clearly see where our forced reload of the NSFetchedResultsController frees up memory.
Conclusion
Apple’s tools have made it easy for us to quickly address such performance issues. Sometimes the fault lies with our code, and sometimes with unexpected behaviour or bugs in Apple’s own frameworks. Either way, through fixes and smart workarounds we’re able to keep the app running smoothly even when importing substantial datasets.