Custom SwiftData Data Stores: Beyond the Default SQLite Backend
SwiftData’s default SQLite backend works for most apps, but the moment you need to persist to a JSON file, talk to a
REST API, or wrap an existing proprietary store, you hit a wall. Since iOS 18, the DataStore protocol gives you a
clean escape hatch — full SwiftData ergonomics with any storage backend you choose.
This post walks through the DataStore protocol, builds a working JSON-backed store from scratch, covers advanced
patterns like snapshot management and error handling, and lays out when a custom store is (and is not) the right call.
We will not cover CloudKit-specific sync — that has its own dedicated post.
Contents
- The Problem
- Understanding the DataStore Protocol
- Building a JSON-Backed Data Store
- Advanced Usage
- Performance Considerations
- When to Use (and When Not To)
- Summary
The Problem
Imagine you are building a Pixar movie catalog app. The design team wants the app to load its initial dataset from a bundled JSON file, let users add favorites while offline, and eventually sync everything to a custom backend — no CloudKit, no Core Data migration stack. With SwiftData’s default configuration, you get SQLite whether you asked for it or not.
import SwiftData
@Model
final class PixarMovie {
    var title: String
    var releaseYear: Int
    var director: String
    var boxOfficeMillions: Double

    init(title: String, releaseYear: Int,
         director: String, boxOfficeMillions: Double) {
        self.title = title
        self.releaseYear = releaseYear
        self.director = director
        self.boxOfficeMillions = boxOfficeMillions
    }
}
// Default configuration — you get SQLite and nothing else
let container = try ModelContainer(for: PixarMovie.self)
This works fine until requirements shift. You cannot swap the SQLite file for a JSON document, intercept writes to push
them to a remote API, or inject a deterministic in-memory store for unit tests — at least not without dropping down to
Core Data’s NSPersistentStore subclass, which defeats the purpose of adopting SwiftData in the first place.
Apple addressed this gap in iOS 18 with the DataStore
protocol, introduced at WWDC 2024 in the session
Create a custom data store with SwiftData.
Understanding the DataStore Protocol
The DataStore protocol is SwiftData’s abstraction layer between ModelContext and the actual persistence mechanism.
When you call modelContext.save(), SwiftData does not write directly to SQLite. It serializes a
DataStoreSaveChangesRequest and
hands it to whatever DataStore implementation backs the container. Your job is to fulfill that contract.
The protocol requires three core capabilities:
- Fetch — respond to a DataStoreFetchRequest by returning matching snapshots.
- Save — process inserts, updates, and deletes from a DataStoreSaveChangesRequest.
- Configuration — associate with a DataStoreConfiguration that the ModelContainer can accept.
Here is the protocol surface you need to implement (lightly abridged). Note that beyond fetch and save, the protocol also requires a handful of identity properties and a throwing initializer:
@available(iOS 18.0, *)
public protocol DataStore: AnyObject {
    associatedtype Configuration: DataStoreConfiguration
    associatedtype Snapshot: DataStoreSnapshot

    var configuration: Configuration { get }
    var identifier: String { get }
    var name: String { get }
    var schema: Schema { get }

    init(_ configuration: Configuration,
         migrationPlan: (any SchemaMigrationPlan.Type)?) throws

    func fetch<T: PersistentModel>(
        _ request: DataStoreFetchRequest<T>
    ) throws -> DataStoreFetchResult<T, Snapshot>

    func save(
        _ request: DataStoreSaveChangesRequest<Snapshot>
    ) throws -> DataStoreSaveChangesResult<Snapshot>
}
Note: The DataStore protocol is available starting with iOS 18, macOS 15, watchOS 11, tvOS 18, and visionOS 2. If you need to support earlier OS versions, you must stick with the default SQLite store or drop down to Core Data.
The Snapshot associated type is how SwiftData tracks the state of each model instance. The default store uses
DefaultSnapshot, and in most custom implementations you will use it too — it captures every persisted property of a
model as a dictionary of values keyed by property path.
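Conceptually, you can picture a snapshot as a frozen key-value record of one model instance. The sketch below is a mental model only; ConceptualSnapshot and its fields are invented here for illustration and are not the real DefaultSnapshot API, which is opaque:

```swift
// Conceptual model of a snapshot: NOT the real DefaultSnapshot API,
// just an illustration of the state SwiftData hands to the store.
struct ConceptualSnapshot {
    let persistentIdentifier: String   // stands in for PersistentIdentifier
    let values: [String: Any]          // persisted properties keyed by name
}

let snapshot = ConceptualSnapshot(
    persistentIdentifier: "PixarMovie/1",
    values: ["title": "Toy Story", "releaseYear": 1995]
)
print(snapshot.values["title"] as? String ?? "nil")   // prints Toy Story
```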
Building a JSON-Backed Data Store
Let us build a complete, working JSONDataStore that persists our Pixar movie catalog to a JSON file on disk. This is
the same pattern you would adapt for a REST API, a Protobuf file, or any other format.
The Configuration
Every custom store needs a configuration type conforming to
DataStoreConfiguration. This is what you
pass into ModelContainer to tell SwiftData which store to use.
import SwiftData
@available(iOS 18.0, *)
final class JSONStoreConfiguration: DataStoreConfiguration {
    typealias Store = JSONDataStore

    var name: String
    var schema: Schema?
    var fileURL: URL

    init(name: String, schema: Schema? = nil, fileURL: URL) {
        self.name = name
        self.schema = schema
        self.fileURL = fileURL
    }

    static func == (lhs: JSONStoreConfiguration,
                    rhs: JSONStoreConfiguration) -> Bool {
        lhs.name == rhs.name && lhs.fileURL == rhs.fileURL
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(name)
        hasher.combine(fileURL)
    }
}
The Store typealias is the glue — it tells SwiftData which DataStore implementation to instantiate for this
configuration.
The Store Implementation
Here is the full JSONDataStore. We store model data as an in-memory dictionary keyed by PersistentIdentifier, and
serialize to JSON on every save.
@available(iOS 18.0, *)
final class JSONDataStore: DataStore {
    typealias Configuration = JSONStoreConfiguration
    typealias Snapshot = DefaultSnapshot

    let configuration: JSONStoreConfiguration
    let name: String
    let schema: Schema
    let identifier: String

    private var storage: [PersistentIdentifier: DefaultSnapshot] = [:]

    init(_ configuration: JSONStoreConfiguration,
         migrationPlan: (any SchemaMigrationPlan.Type)? = nil) throws {
        self.configuration = configuration
        self.name = configuration.name
        self.schema = configuration.schema!  // always supplied in this post
        self.identifier = configuration.fileURL.lastPathComponent
        self.storage = Self.loadFromDisk(at: configuration.fileURL)
    }

    func fetch<T: PersistentModel>(
        _ request: DataStoreFetchRequest<T>
    ) throws -> DataStoreFetchResult<T, DefaultSnapshot> {
        let snapshots = storage.values.filter { snapshot in
            snapshot.persistentIdentifier.entityName
                == String(describing: T.self)
        }
        return DataStoreFetchResult(
            descriptor: request.descriptor,
            fetchedSnapshots: Array(snapshots)
        )
    }

    func save(
        _ request: DataStoreSaveChangesRequest<DefaultSnapshot>
    ) throws -> DataStoreSaveChangesResult<DefaultSnapshot> {
        var remappedIdentifiers = [PersistentIdentifier: PersistentIdentifier]()

        // Inserts arrive under temporary identifiers; mint permanent ones
        // and report the mapping back to SwiftData.
        for snapshot in request.inserted {
            let permanentIdentifier = try PersistentIdentifier.identifier(
                for: identifier,
                entityName: snapshot.persistentIdentifier.entityName,
                primaryKey: UUID()
            )
            let permanentSnapshot = snapshot.copy(
                persistentIdentifier: permanentIdentifier,
                remappedIdentifiers: nil
            )
            storage[permanentIdentifier] = permanentSnapshot
            remappedIdentifiers[snapshot.persistentIdentifier] = permanentIdentifier
        }
        // Updates and deletes already carry permanent identifiers
        for snapshot in request.updated {
            storage[snapshot.persistentIdentifier] = snapshot
        }
        for snapshot in request.deleted {
            storage.removeValue(forKey: snapshot.persistentIdentifier)
        }
        try writeToDisk()
        return DataStoreSaveChangesResult<DefaultSnapshot>(
            for: self.identifier,
            remappedIdentifiers: remappedIdentifiers
        )
    }
}
The fetch method filters snapshots by entity name to return only the models matching the request type. The save method applies inserts, updates, and deletes in order, then persists the entire store to disk. One protocol subtlety: SwiftData hands the store inserted models under temporary identifiers, and a store is expected to mint permanent ones and report the mapping back through the DataStoreSaveChangesResult.
Serialization Helpers
The disk I/O lives in a private extension. We encode and decode the snapshot dictionary using JSONEncoder and
JSONDecoder. In production you would want more robust error handling here, but the shape stays the same.
@available(iOS 18.0, *)
private extension JSONDataStore {
    static func loadFromDisk(
        at url: URL
    ) -> [PersistentIdentifier: DefaultSnapshot] {
        guard FileManager.default.fileExists(atPath: url.path()),
              let data = try? Data(contentsOf: url),
              let decoded = try? JSONDecoder().decode(
                  [PersistentIdentifier: DefaultSnapshot].self,
                  from: data
              ) else {
            return [:]
        }
        return decoded
    }

    func writeToDisk() throws {
        let data = try JSONEncoder().encode(storage)
        try data.write(to: configuration.fileURL, options: .atomic)
    }
}
Tip: Use .atomic when writing to disk. It writes to a temporary file first and renames it on success, preventing data corruption if the app is killed mid-write.
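The atomic behavior is plain Foundation and easy to see in isolation. This sketch writes to a temporary path of our own choosing; the file name is arbitrary:

```swift
import Foundation

// Write atomically: Foundation stages the bytes in a temporary file and
// renames it over the destination only once the write fully succeeds.
let url = FileManager.default.temporaryDirectory
    .appendingPathComponent("atomic_demo.json")

let payload = Data(#"{"title": "Up"}"#.utf8)
try payload.write(to: url, options: .atomic)

// A crash mid-write leaves either the old file or the new one on disk,
// never a half-written mix of both.
let readBack = try Data(contentsOf: url)
print(String(decoding: readBack, as: UTF8.self))   // prints {"title": "Up"}
```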
Wiring It Into the App
With the configuration and store in place, plugging it into a SwiftUI app is a one-line change in how you create the
ModelContainer.
import SwiftUI
import SwiftData
@main
@available(iOS 18.0, *)
struct PixarCatalogApp: App {
    let container: ModelContainer

    init() {
        let fileURL = URL.documentsDirectory
            .appending(path: "pixar_catalog.json")
        let config = JSONStoreConfiguration(
            name: "PixarJSONStore",
            schema: Schema([PixarMovie.self]),
            fileURL: fileURL
        )
        container = try! ModelContainer(
            for: PixarMovie.self,
            configurations: config
        )
    }

    var body: some Scene {
        WindowGroup {
            MovieListView()
        }
        .modelContainer(container)
    }
}
From the view layer, nothing changes. @Query, modelContext.insert(), and modelContext.save() all work exactly as
they do with the default SQLite store. That is the entire point of the DataStore abstraction.
struct MovieListView: View {
    @Query(sort: \PixarMovie.releaseYear)
    private var movies: [PixarMovie]
    @Environment(\.modelContext) private var context

    var body: some View {
        NavigationStack {
            List(movies) { movie in
                VStack(alignment: .leading) {
                    Text(movie.title).font(.headline)
                    Text("\(movie.releaseYear) — \(movie.director)")
                        .font(.subheadline)
                        .foregroundStyle(.secondary)
                }
            }
            .navigationTitle("Pixar Catalog")
            .toolbar {
                Button("Add Toy Story") {
                    let movie = PixarMovie(
                        title: "Toy Story",
                        releaseYear: 1995,
                        director: "John Lasseter",
                        boxOfficeMillions: 373.6
                    )
                    context.insert(movie)
                }
            }
        }
    }
}
Advanced Usage
Filtering and Sorting in Fetch
The basic implementation above returns every snapshot for the requested entity type and simply ignores the descriptor's predicate and sort order, which would silently return wrong results for filtered queries. The DataStoreFetchRequest carries a FetchDescriptor with an optional predicate, a sortBy array of sort descriptors, and an optional fetchLimit. A store has two options: evaluate these against its snapshots directly, or throw DataStoreError.preferInMemoryFilter and DataStoreError.preferInMemorySort to ask SwiftData to perform the filtering and sorting in memory on its behalf. The in-memory fallback works for small datasets; for larger catalogs you should push the work down into the store:
func fetch<T: PersistentModel>(
    _ request: DataStoreFetchRequest<T>
) throws -> DataStoreFetchResult<T, DefaultSnapshot> {
    // Delegate predicate and sort evaluation to SwiftData rather than
    // returning unfiltered snapshots as if they matched.
    if request.descriptor.predicate != nil {
        throw DataStoreError.preferInMemoryFilter
    }
    if !request.descriptor.sortBy.isEmpty {
        throw DataStoreError.preferInMemorySort
    }

    var snapshots = storage.values.filter { snapshot in
        snapshot.persistentIdentifier.entityName
            == String(describing: T.self)
    }
    // Apply the fetch limit if specified (safe here because the
    // remaining path is unpredicated and unsorted)
    if let fetchLimit = request.descriptor.fetchLimit {
        snapshots = Array(snapshots.prefix(fetchLimit))
    }
    return DataStoreFetchResult(
        descriptor: request.descriptor,
        fetchedSnapshots: snapshots
    )
}
Warning: Predicate evaluation on DefaultSnapshot values requires manual work — you are operating on raw property dictionaries, not typed model instances. For complex predicates, consider converting snapshots back to model values or using an indexed in-memory structure.
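To make the warning concrete, here is that manual evaluation in miniature. The records array is a hypothetical stand-in for raw snapshot values, and the filter reproduces by hand what a #Predicate comparing releaseYear would express:

```swift
// Hypothetical raw property dictionaries, standing in for snapshot values.
let records: [[String: Any]] = [
    ["title": "Up", "releaseYear": 2009],
    ["title": "Coco", "releaseYear": 2017],
    ["title": "Luca", "releaseYear": 2021],
]

// Manual equivalent of a `releaseYear > 2010` predicate against untyped
// values: unwrap, type-check, then compare.
let recent = records.filter { record in
    guard let year = record["releaseYear"] as? Int else { return false }
    return year > 2010
}
print(recent.compactMap { $0["title"] as? String })   // prints ["Coco", "Luca"]
```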
Thread Safety
The default SQLite store handles concurrent access internally. A custom store does not get that for free. If your app
uses multiple ModelContext instances or performs background saves, you need to synchronize access to your backing
storage.
The cleanest approach is to make your store an actor or protect the mutable state with a lock:
import os

@available(iOS 18.0, *)
final class JSONDataStore: DataStore {
    // ... typealias declarations ...

    private let lock = OSAllocatedUnfairLock<
        [PersistentIdentifier: DefaultSnapshot]
    >(initialState: [:])

    func save(
        _ request: DataStoreSaveChangesRequest<DefaultSnapshot>
    ) throws -> DataStoreSaveChangesResult<DefaultSnapshot> {
        try lock.withLock { storage in
            for snapshot in request.inserted {
                storage[snapshot.persistentIdentifier] = snapshot
            }
            for snapshot in request.updated {
                storage[snapshot.persistentIdentifier] = snapshot
            }
            for snapshot in request.deleted {
                storage.removeValue(
                    forKey: snapshot.persistentIdentifier
                )
            }
            let data = try JSONEncoder().encode(storage)
            try data.write(
                to: configuration.fileURL, options: .atomic
            )
        }
        return DataStoreSaveChangesResult<DefaultSnapshot>(
            for: self.configuration.name
        )
    }
}
Tip: OSAllocatedUnfairLock (iOS 16+) is the modern replacement for os_unfair_lock. It avoids the known Swift concurrency pitfalls of using os_unfair_lock directly from Swift code.
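If you prefer structured concurrency over locks, the same protection can be sketched with an actor. Everything here is simplified: plain String keys stand in for PersistentIdentifier, and SnapshotStorage is an invented name:

```swift
// Actor-based storage: the actor serializes all access, so concurrent
// saves cannot race on the dictionary.
actor SnapshotStorage {
    private var storage: [String: String] = [:]

    func apply(inserted: [(key: String, value: String)],
               deleted: [String]) {
        for item in inserted { storage[item.key] = item.value }
        for key in deleted { storage.removeValue(forKey: key) }
    }

    func count() -> Int { storage.count }
}

let store = SnapshotStorage()
await store.apply(inserted: [(key: "movie-1", value: "Up"),
                             (key: "movie-2", value: "Coco")],
                  deleted: [])
await store.apply(inserted: [], deleted: ["movie-1"])
print(await store.count())   // prints 1
```

One caveat: the DataStore protocol's fetch and save requirements are synchronous, so you cannot await an actor from inside them. In practice the lock shown above fits the protocol more naturally; an actor suits stores that funnel their work through an async pipeline of their own.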
In-Memory Store for Testing
One of the most practical uses of a custom store is deterministic testing. You can create a trivial in-memory implementation that never touches disk:
@available(iOS 18.0, *)
final class InMemoryStoreConfiguration: DataStoreConfiguration {
    typealias Store = InMemoryDataStore

    var name: String
    var schema: Schema?

    init(name: String = "InMemoryStore", schema: Schema? = nil) {
        self.name = name
        self.schema = schema
    }

    static func == (lhs: InMemoryStoreConfiguration,
                    rhs: InMemoryStoreConfiguration) -> Bool {
        lhs.name == rhs.name
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(name)
    }
}
@available(iOS 18.0, *)
final class InMemoryDataStore: DataStore {
    typealias Configuration = InMemoryStoreConfiguration
    typealias Snapshot = DefaultSnapshot

    let configuration: InMemoryStoreConfiguration
    let name: String
    let schema: Schema
    let identifier: String

    private var storage: [PersistentIdentifier: DefaultSnapshot] = [:]

    init(_ configuration: InMemoryStoreConfiguration,
         migrationPlan: (any SchemaMigrationPlan.Type)? = nil) throws {
        self.configuration = configuration
        self.name = configuration.name
        self.schema = configuration.schema!  // always supplied in this post
        self.identifier = configuration.name
    }

    func fetch<T: PersistentModel>(
        _ request: DataStoreFetchRequest<T>
    ) throws -> DataStoreFetchResult<T, DefaultSnapshot> {
        let snapshots = storage.values.filter { snapshot in
            snapshot.persistentIdentifier.entityName
                == String(describing: T.self)
        }
        return DataStoreFetchResult(
            descriptor: request.descriptor,
            fetchedSnapshots: Array(snapshots)
        )
    }

    func save(
        _ request: DataStoreSaveChangesRequest<DefaultSnapshot>
    ) throws -> DataStoreSaveChangesResult<DefaultSnapshot> {
        var remappedIdentifiers = [PersistentIdentifier: PersistentIdentifier]()

        // Mint permanent identifiers for inserts, as in the JSON store
        for snapshot in request.inserted {
            let permanentIdentifier = try PersistentIdentifier.identifier(
                for: identifier,
                entityName: snapshot.persistentIdentifier.entityName,
                primaryKey: UUID()
            )
            storage[permanentIdentifier] = snapshot.copy(
                persistentIdentifier: permanentIdentifier,
                remappedIdentifiers: nil
            )
            remappedIdentifiers[snapshot.persistentIdentifier] = permanentIdentifier
        }
        for snapshot in request.updated {
            storage[snapshot.persistentIdentifier] = snapshot
        }
        for snapshot in request.deleted {
            storage.removeValue(forKey: snapshot.persistentIdentifier)
        }
        return DataStoreSaveChangesResult<DefaultSnapshot>(
            for: identifier,
            remappedIdentifiers: remappedIdentifiers
        )
    }
}
Now your tests get a clean, isolated store on every run with zero disk I/O and no leftover state:
func makeTestContainer() throws -> ModelContainer {
    let config = InMemoryStoreConfiguration(
        schema: Schema([PixarMovie.self])
    )
    return try ModelContainer(
        for: PixarMovie.self, configurations: config
    )
}
Custom Snapshot Types
For most use cases, DefaultSnapshot is sufficient. But the DataStore protocol lets you define a custom Snapshot
type conforming to DataStoreSnapshot if your
backend has different serialization needs — for example, if you are wrapping a Protobuf store or need to track
server-side revision numbers alongside each record.
struct VersionedSnapshot: DataStoreSnapshot {
    var persistentIdentifier: PersistentIdentifier
    var serverRevision: Int
    var values: [String: Any]
    // Simplified for clarity
}
This is an advanced escape hatch. Unless your backend demands custom metadata per record, prefer DefaultSnapshot and
avoid the additional conformance work.
Performance Considerations
Custom stores shift performance responsibility to you. Here is what to watch for:
Serialization cost. Our JSON store serializes the entire dataset on every save. For a movie catalog with a few
hundred entries this is trivial, but if you are persisting thousands of records with nested relationships, the
JSONEncoder pass becomes the bottleneck. Profile with Instruments’ Time Profiler to measure. For large datasets,
consider incremental writes — only serialize the changed snapshots and append them to the file, or switch to a more
efficient format like binary property lists or FlatBuffers.
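One way to sketch the incremental-write idea is a change log in JSON Lines form: encode only the records touched by a save and append them, replaying the log on load. The ChangedRecord shape below is hypothetical; a real store would derive it from the snapshots in the save request:

```swift
import Foundation

// Hypothetical changed-record shape standing in for snapshot data.
struct ChangedRecord: Codable {
    let id: Int
    let title: String
}

let url = FileManager.default.temporaryDirectory
    .appendingPathComponent("changes.jsonl")
FileManager.default.createFile(atPath: url.path, contents: Data())

// Append one JSON object per line instead of re-encoding the whole store.
let handle = try FileHandle(forWritingTo: url)
for record in [ChangedRecord(id: 1, title: "Up"),
               ChangedRecord(id: 2, title: "Coco")] {
    handle.seekToEndOfFile()
    var line = try JSONEncoder().encode(record)
    line.append(0x0A)  // newline delimiter
    handle.write(line)
}
handle.closeFile()

// On load, replay the log; for updates, later lines win for the same id.
let lines = String(decoding: try Data(contentsOf: url), as: UTF8.self)
    .split(separator: "\n")
print(lines.count)   // prints 2
```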
Fetch performance. The naive implementation iterates every snapshot and filters by entity name. This is O(n) per
fetch. If your store holds multiple entity types and large volumes, maintain a secondary index — a
[String: Set<PersistentIdentifier>] keyed by entity name — so lookups become O(1) for the entity filter step.
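A sketch of that index, using an invented RecordID in place of PersistentIdentifier (which already exposes the entityName the index needs):

```swift
// Hypothetical identifier standing in for PersistentIdentifier.
struct RecordID: Hashable {
    let entityName: String
    let id: Int
}

// Secondary index maintained alongside the snapshot dictionary:
// entity name -> identifiers of that entity's snapshots.
var byEntity: [String: Set<RecordID>] = [:]

func indexInsert(_ rid: RecordID) {
    byEntity[rid.entityName, default: []].insert(rid)
}

func indexRemove(_ rid: RecordID) {
    byEntity[rid.entityName]?.remove(rid)
}

indexInsert(RecordID(entityName: "PixarMovie", id: 1))
indexInsert(RecordID(entityName: "PixarMovie", id: 2))
indexInsert(RecordID(entityName: "PixarShort", id: 3))
indexRemove(RecordID(entityName: "PixarMovie", id: 1))

// Entity filtering is now a dictionary lookup, not a full scan.
print(byEntity["PixarMovie"]?.count ?? 0)   // prints 1
```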
Memory pressure. The in-memory dictionary holds every snapshot for the lifetime of the store. For apps with large datasets, consider a hybrid approach: keep a file-backed store and only load snapshots into memory lazily or in batches.
Disk I/O on the main thread. If save() calls writeToDisk() synchronously and your view triggers a save from a
button tap, you are blocking the main thread. Move the disk write to a background queue or use ModelContext on a
background thread via ModelActor. The
WWDC 2024 session on custom data stores demonstrates this
pattern.
When to Use (and When Not To)
| Scenario | Recommendation |
|---|---|
| Standard CRUD app | Stick with the default SQLite store. It handles migrations, indexing, faulting, and CloudKit sync out of the box. |
| Bundled read-only dataset | A custom store is a strong fit. Load the bundled file on init, serve reads, ignore writes. |
| REST API as primary backend | Use a custom store to bridge SwiftData queries to network requests. Pair with a local cache for offline support. |
| Unit and UI testing | An in-memory custom store gives you deterministic, isolated persistence with zero disk I/O. |
| CloudKit sync required | Do not build a custom store. Use the built-in CloudKit integration. See SwiftData + CloudKit Sync. |
| Wrapping a legacy database | A custom store lets you present a SwiftData interface over any backend — SQLCipher, Realm, GRDB, or proprietary formats. |
The default store is production-hardened, battle-tested across millions of apps, and backed by Core Data’s decades of optimization. Only reach for a custom store when you have a concrete requirement that the default cannot satisfy.
Summary
- The DataStore protocol (iOS 18+) decouples SwiftData's model layer from its persistence backend, letting you replace SQLite with any storage mechanism.
- A custom store requires three pieces: a DataStoreConfiguration conformance, a DataStore conformance implementing fetch and save, and serialization logic for your chosen format.
- Thread safety is your responsibility — use OSAllocatedUnfairLock or an actor to protect mutable state in concurrent environments.
- In-memory custom stores are invaluable for testing — they provide deterministic, isolated persistence with zero disk I/O.
- Prefer the default SQLite store unless you have a specific requirement (bundled data, custom backend, testing isolation) that demands a custom implementation.
For tracking changes across multiple contexts and syncing with background processes, read SwiftData Persistent History and Change Tracking. If your app needs multi-device sync, see CloudKit and iCloud Sync for a comprehensive walkthrough.