@concurrent and the Single-Threaded Default: Swift 6.2's Concurrency Shift
You wrote a perfectly correct async function in Swift 6.0. It fetched data, parsed JSON, and returned a model. Six
months later you recompile under Swift 6.2 and discover it now runs on the main thread. Nothing in your code changed —
the language’s execution model did.
This post dives deep into the single-threaded default that Swift 6.2 introduces for async functions, the @concurrent
attribute that opts back into cooperative-pool execution, and the reasoning behind the shift. We will not rehash the
broader “approachable concurrency” feature set — if you need that context, read
Swift 6.2 Approachable Concurrency first. Here we focus exclusively on
where your async code runs and how to control it.
Note: The features covered here require Swift 6.2, shipping with Xcode 26+ and the Swift 6.2 compiler. See the WWDC25 session What’s New in Swift (session 245) for the canonical explanation.
Contents
- The Problem
- The New Default: Caller-Isolation Inheritance
- Introducing @concurrent
- How This Works Under the Hood
- Advanced Usage and Edge Cases
- Performance Considerations
- When to Use (and When Not To)
- Summary
The Problem
Consider a service layer you have shipped in production since Swift 5.5. It fetches a catalog of Pixar movies from a remote API:
// Swift 6.0/6.1 behavior — runs on the cooperative thread pool
func fetchMovies() async throws -> [Movie] {
    let url = URL(string: "https://api.example.com/pixar/movies")!
    let (data, response) = try await URLSession.shared.data(from: url)
    guard let http = response as? HTTPURLResponse,
          http.statusCode == 200 else {
        throw MovieError.badResponse
    }
    return try JSONDecoder().decode([Movie].self, from: data)
}
In Swift 6.0 and 6.1, calling await fetchMovies() from a SwiftUI view’s .task modifier dispatches the function body
onto the cooperative thread pool. The main thread stays free. That is the behavior you relied on — and it worked.
Now upgrade to Swift 6.2 with the NonisolatedNonsendingByDefault upcoming feature enabled (part of the Approachable Concurrency settings). The same function, called from the same @MainActor-isolated view, now inherits the caller’s isolation. It runs on the main actor. Your JSON decoding — potentially parsing thousands of Pixar movie records — blocks the UI.
Nothing crashed. No compiler error fired. The behavior silently changed, and your scroll performance regressed.
This is the fundamental tension Swift 6.2 introduces: safety by default (no accidental data races from thread hops) at the cost of performance by default (CPU work stays on the calling actor unless you say otherwise).
The New Default: Caller-Isolation Inheritance
What Changed
In Swift 6.0/6.1, a non-isolated async function ran on the global cooperative executor — essentially a background
thread pool. The compiler treated async as an implicit signal to leave the caller’s isolation domain.
Swift 6.2 reverses this. A non-isolated async function now inherits the caller’s actor isolation at runtime. The
technical term in the proposal is nonisolated(nonsending) — the function is not isolated to any specific actor,
but it does not send its execution to another context either. It stays wherever it was called from.
// Under Swift 6.2, this function inherits the caller's isolation
func processRenderFrame(_ frame: RenderFrame) async -> ProcessedFrame {
    // If called from @MainActor code, this runs on the main thread
    // If called from a detached task, this runs on the cooperative pool
    let normalized = frame.pixels.map { $0.normalized() }
    let compressed = await compress(normalized)
    return ProcessedFrame(data: compressed)
}
If a @MainActor view model calls processRenderFrame, the pixel normalization — a CPU-intensive loop — now runs on
the main thread. In Swift 6.0, it would have hopped to the pool automatically.
Why Apple Made This Change
The motivation is documented in SE-0461 and the WWDC25 session #245. The core arguments are:
- Fewer implicit context switches. Every hop between actors requires a suspension point and a potential priority inversion. Most async functions do not need a dedicated thread — they simply call another async API and await the result.
- Eliminates an entire category of data races. When a function inherits the caller’s isolation, it cannot accidentally access the caller’s mutable state from a different thread. The compiler can verify safety without requiring Sendable checks on every closure and return value.
- Matches developer intuition. When you call a function, you expect it to “run here” unless you explicitly send it elsewhere. The old model — where async secretly meant “run over there” — surprised developers and led to bugs that were hard to diagnose.
The trade-off is explicit: if you want background execution, you now have to ask for it.
Introducing @concurrent
The @concurrent attribute is Swift 6.2’s opt-in mechanism for cooperative-pool execution. It tells the compiler: “This
function should always run on the global concurrent executor, regardless of who calls it.”
@concurrent
func decodeMovieCatalog(from data: Data) async throws -> [Movie] {
    // Always runs on the cooperative thread pool
    let decoder = JSONDecoder()
    decoder.dateDecodingStrategy = .iso8601
    return try decoder.decode([Movie].self, from: data)
}
When a @MainActor view calls await decodeMovieCatalog(from: data), the runtime hops to the thread pool before
executing the function body. This is the old Swift 6.0 behavior — but now you are requesting it explicitly.
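At the call site this is still an ordinary await; the hop and the return trip are handled by the runtime. A sketch of that usage (the view-model type and its properties are hypothetical):

```swift
@MainActor
final class CatalogViewModel: ObservableObject {
    @Published var movies: [Movie] = []

    func load(_ data: Data) async {
        do {
            // Hops to the cooperative pool inside decodeMovieCatalog,
            // then resumes here on the main actor for the assignment
            movies = try await decodeMovieCatalog(from: data)
        } catch {
            movies = []
        }
    }
}
```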
The Sendable Consequence
There is an important constraint: because @concurrent functions run on a different isolation domain than the caller,
every argument and return value must be Sendable. The
compiler enforces this at the call site.
@concurrent
func generateThumbnails(for assets: [RenderAsset]) async -> [Thumbnail] {
    // RenderAsset and Thumbnail must conform to Sendable
    assets.map { asset in
        Thumbnail(image: asset.downsample(to: .thumbnail))
    }
}
If RenderAsset is a class with mutable state and no Sendable conformance, this will not compile. This is exactly the
kind of safety check Swift 6 was designed for — but now it only triggers when you opt into cross-isolation execution
rather than being the default for every async function.
Applying @concurrent to Closures
The attribute also works on closure parameters, which is critical for APIs that accept async work:
func withBackgroundExecution<T: Sendable>(
    _ operation: @concurrent () async throws -> T
) async rethrows -> T {
    try await operation()
}

// Usage in a view model
let movies = try await withBackgroundExecution {
    try await decodeMovieCatalog(from: data)
}
This pattern gives you fine-grained control: the outer function inherits the caller’s isolation, but the closure body runs on the pool.
How This Works Under the Hood
The Executor Model
Swift’s concurrency runtime uses executors to determine where code runs. Every actor has a serial executor. The main actor’s executor is the main dispatch queue. The cooperative pool is the default executor for detached work.
In Swift 6.0, a non-isolated async function was assigned to the cooperative pool’s executor. In Swift 6.2, a
non-isolated async function receives the caller’s executor at the point of the await call. The runtime passes this
executor implicitly — no new syntax required at the call site.
actor RenderFarm {
    private var frames: [RenderFrame] = []

    func processAll() async -> [ProcessedFrame] {
        // renderFrame inherits RenderFarm's serial executor
        var results: [ProcessedFrame] = []
        for frame in frames {
            let processed = await renderFrame(frame)
            results.append(processed)
        }
        return results
    }
}

// This function runs on whatever executor called it
func renderFrame(_ frame: RenderFrame) async -> ProcessedFrame {
    // Under Swift 6.2: inherits RenderFarm's executor
    // Under Swift 6.0: would run on the cooperative pool
    ProcessedFrame(data: frame.pixels.map { $0.applyLighting() })
}
The renderFrame function runs serially on RenderFarm’s executor. This means it cannot run concurrently with other
methods on that actor, which is safe but may limit throughput if each frame is independent.
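If the frames are independent, one way to regain parallelism is to fan the per-frame work out to task-group children, which still run on the cooperative pool. A sketch, assuming RenderFrame and ProcessedFrame are Sendable and result order does not matter:

```swift
extension RenderFarm {
    func processAllConcurrently() async -> [ProcessedFrame] {
        let snapshot = frames // copy actor state before fanning out
        return await withTaskGroup(of: ProcessedFrame.self) { group in
            for frame in snapshot {
                group.addTask {
                    // Child tasks run off the actor, in parallel
                    await renderFrame(frame)
                }
            }
            var results: [ProcessedFrame] = []
            for await processed in group {
                results.append(processed)
            }
            return results
        }
    }
}
```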
nonisolated(nonsending) vs. nonisolated
Swift 6.2 introduces a distinction that did not previously exist:
- nonisolated (explicit keyword) — historically this meant the function ran on the cooperative pool (the Swift 6.0 behavior). With the NonisolatedNonsendingByDefault upcoming feature enabled, an explicitly nonisolated async function follows the same rule as an unmarked one: it inherits the caller’s execution context.
- nonisolated(nonsending) — the function is not isolated to any actor but inherits the caller’s execution context. This is the new default for unmarked async functions, and it can also be written explicitly to pin the behavior regardless of feature flags.
// Explicit nonisolated: with the upcoming feature enabled,
// this also inherits the caller's context
nonisolated func explicitNonisolated() async -> String {
    "I follow my caller, just like an unmarked async function"
}
// Default in Swift 6.2: inherits caller's context
func newDefault() async -> String {
    "I run wherever my caller runs"
}

// Explicit @concurrent: runs on the pool, with Sendable checks
@concurrent
func explicitPool() async -> String {
    "I always run on the pool, and my inputs/outputs must be Sendable"
}
Warning: If you have existing code that uses the explicit nonisolated keyword on async functions, its execution semantics also change once the NonisolatedNonsendingByDefault feature is enabled: the flag applies to all non-isolated async functions, whether the keyword is written out or not. If such a function must keep running on the pool, mark it @concurrent. Be precise about which functions you have annotated and which you left unmarked.
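If you want the caller-inheriting semantics to hold no matter which flags a client builds with, you can spell them out with the nonisolated(nonsending) modifier. A minimal sketch (the function and the Manifest type are hypothetical; Manifest is assumed Decodable and the payload small):

```swift
// Pins caller-inheritance explicitly, independent of upcoming-feature flags
nonisolated(nonsending)
func parseManifest(_ data: Data) async throws -> Manifest {
    // Runs wherever it was called from; no executor hop
    try JSONDecoder().decode(Manifest.self, from: data)
}
```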
Advanced Usage and Edge Cases
Protocol Conformances
When a protocol requires an async method, the conforming type’s implementation follows the same rules:
protocol AssetProcessor {
    func process(_ asset: RenderAsset) async -> ProcessedAsset
}

// This conformance inherits caller isolation by default
struct PixarAssetPipeline: AssetProcessor {
    func process(_ asset: RenderAsset) async -> ProcessedAsset {
        // Runs on the caller's executor in Swift 6.2
        let optimized = asset.optimize()
        return ProcessedAsset(data: optimized)
    }
}
If you want the protocol to guarantee background execution, mark the requirement @concurrent:
protocol HeavyAssetProcessor {
    @concurrent
    func process(_ asset: RenderAsset) async -> ProcessedAsset
}
All conforming types must now also mark their implementation @concurrent, and all arguments and return values must be
Sendable. This is a protocol-level decision that affects every adopter.
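A conforming type then repeats the attribute on its implementation. A sketch, assuming RenderAsset and ProcessedAsset are Sendable:

```swift
struct BatchAssetProcessor: HeavyAssetProcessor {
    @concurrent
    func process(_ asset: RenderAsset) async -> ProcessedAsset {
        // Guaranteed to run on the cooperative pool for every caller
        ProcessedAsset(data: asset.optimize())
    }
}
```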
Mixing Isolation in Task Groups
Task groups create concurrent child tasks. Each child task in a withTaskGroup block runs on the cooperative pool by
default — this has not changed. But the code around the task group inherits the caller’s isolation:
@MainActor
func loadMoviePosters(for movies: [Movie]) async -> [MoviePoster] {
    // This outer body runs on the main actor
    await withTaskGroup(of: MoviePoster.self) { group in
        for movie in movies {
            group.addTask {
                // Each child task runs on the cooperative pool
                // because addTask creates a new, non-isolated context
                await downloadPoster(for: movie)
            }
        }
        var posters: [MoviePoster] = []
        for await poster in group {
            posters.append(poster) // Back on the main actor
        }
        return posters
    }
}
Tip: addTask closures are implicitly @Sendable and @concurrent. They always run on the cooperative pool. This is one of the most reliable ways to offload CPU work without explicitly annotating your functions.
Async let Bindings
async let follows the same pattern as task group children — the bound expression runs concurrently on the pool:
@MainActor
func loadMovieDetails(id: String) async throws -> MovieDetails {
    // These two fetches run concurrently on the pool
    async let metadata = fetchMetadata(for: id)
    async let reviews = fetchReviews(for: id)
    // Awaiting the results resumes on the main actor
    return try await MovieDetails(
        metadata: metadata,
        reviews: reviews
    )
}
This means async let and TaskGroup.addTask are already your escape hatches for parallelism. @concurrent is for the
cases where a standalone function should always run off the caller’s actor, regardless of how it is called.
Performance Considerations
The behavioral change has real performance implications. Here is the mental model:
Before (Swift 6.0): Every async call was a potential context switch. The runtime hopped to the pool, executed, and
hopped back. For a chain of ten async calls from a @MainActor context, that was up to twenty context switches.
After (Swift 6.2): A chain of ten non-annotated async calls from a @MainActor context runs entirely on the main
thread. Zero context switches. Faster for lightweight work. Dangerous for heavy work.
When the Default Hurts
CPU-bound work that exceeds a few milliseconds will block the caller’s actor. Common offenders include:
- JSON/Protobuf decoding of large payloads
- Image processing (resizing, filtering, color space conversion)
- Sorting or filtering large collections
- Cryptographic operations
Profile with Instruments using the Swift Concurrency
template. Look for main-thread hangs correlated with async calls that previously ran on the pool.
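Alongside profiling, it can help to assert at runtime where a function actually landed. A minimal sketch using two existing checks, MainActor.assertIsolated and Dispatch’s dispatchPrecondition (both trap in debug builds when the condition fails; the function names here are hypothetical):

```swift
import Foundation

@concurrent
func decodeLargePayload(_ data: Data) async throws -> [Movie] {
    // Traps if this ever executes on the main dispatch queue
    dispatchPrecondition(condition: .notOnQueue(.main))
    return try JSONDecoder().decode([Movie].self, from: data)
}

@MainActor
func applyMovies(_ movies: [Movie]) {
    // Traps if called off the main actor
    MainActor.assertIsolated("UI mutation must happen on the main actor")
    // ...update UI state here
}
```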
When the Default Helps
The majority of async functions in a typical app are thin wrappers: they call another async API (like
URLSession.data(from:)) and return the result. These functions do almost no CPU work themselves. For these cases,
staying on the caller’s executor eliminates unnecessary hops and improves latency.
// This function does negligible CPU work — the default is ideal
func fetchDirector(for movieID: String) async throws -> Director {
    let url = URL(string: "https://api.example.com/pixar/directors/\(movieID)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(Director.self, from: data)
}
Apple Docs: TaskGroup — Swift Standard Library
When to Use (and When Not To)
| Scenario | Recommendation |
|---|---|
| Thin async wrappers | Use the default. No annotation needed. |
| CPU-intensive computation (>2ms) | Mark @concurrent to offload. |
| Protocol methods that must run off-actor | Mark @concurrent. |
| Existing nonisolated async functions | Audit them. They adopt the new default under the flag. |
| Task group child tasks | No annotation — already on the pool. |
| async let bindings | No annotation — already concurrent. |
| Mixed isolated/non-isolated callers | Prefer the default. It adapts. |
| Library public API with CPU work | Add @concurrent explicitly. |
The Library Author’s Dilemma
If you maintain a Swift package, the new default changes your API contract. A function that used to run on the pool now inherits the caller’s context. If your function does meaningful CPU work, callers who upgrade to Swift 6.2 may see regressions without any code change on their side.
The conservative approach for library authors: audit every public async function and add @concurrent to any that
perform more than trivial computation. This preserves the Swift 6.0 behavior and makes the threading contract explicit
in your API surface.
// Public API — make the threading contract explicit
public struct MovieRenderer {
    @concurrent
    public func render(scene: SceneGraph) async -> RenderedFrame {
        // Consumers of this library expect background execution
        let rasterized = rasterize(scene)
        return RenderedFrame(pixels: rasterized)
    }
}
Summary
- In Swift 6.2, non-annotated async functions inherit the caller’s actor isolation at runtime instead of hopping to the cooperative thread pool. This is the single-threaded default.
- The @concurrent attribute explicitly opts a function into cooperative-pool execution, restoring the Swift 6.0 behavior. All arguments and return values must be Sendable.
- async let and TaskGroup.addTask closures are unaffected — they already run on the pool.
- The new default eliminates unnecessary context switches for thin async wrappers but can block the caller’s actor when applied to CPU-intensive work. Profile with Instruments.
- Library authors should audit public async APIs and add @concurrent where non-trivial computation occurs to preserve a clear threading contract.
The execution model shift is the most consequential runtime change in Swift 6.2. If you want to see how it fits into the
broader set of concurrency ergonomics improvements — default main actor isolation, nonisolated(nonsending) syntax, and
migration strategy — read Swift 6.2 Approachable Concurrency. For
protocol-level isolation changes that complement @concurrent, see
Isolated Conformances.