UI Automation Testing in Xcode 26: Record, Replay, and Review
You have shipped a SwiftUI app with solid unit tests and thorough Swift Testing coverage, yet somehow a regression slips through: a button that no longer navigates anywhere after a refactor. The logic is correct in isolation, but the view hierarchy broke the connection. UI tests catch exactly this category of bug, and Xcode 26 makes writing them radically easier.
This post covers the three pillars of Xcode 26’s UI automation workflow: Recording interactions into Swift test code, Replaying across multiple device configurations in a single run, and Reviewing failures with the new Automation Explorer. We will not cover unit testing fundamentals or the Swift Testing framework itself — those are prerequisites covered in their own posts.
Contents
- The Problem
- Recording UI Tests in Xcode 26
- Multi-Configuration Replay
- Automation Explorer: Reviewing Failures
- Advanced Usage
- Performance Considerations
- When to Use (and When Not To)
- Summary
The Problem
Traditional UI testing with XCTest required manually writing every query, tap, and assertion. Even a simple navigation flow produced verbose, brittle code that broke whenever accessibility identifiers shifted. Consider a typical screen in a Pixar movie catalog app:
// The old XCTest approach -- verbose and fragile
func testNavigateToMovieDetail() throws {
    let app = XCUIApplication()
    app.launch()
    // Scroll to find the cell -- hope the identifier hasn't changed
    let movieCell = app.cells.matching(
        identifier: "movieCell_toyStory"
    ).firstMatch
    XCTAssertTrue(movieCell.waitForExistence(timeout: 5))
    movieCell.tap()
    // Assert the detail screen appeared
    let titleLabel = app.staticTexts["Toy Story"]
    XCTAssertTrue(titleLabel.exists)
    let directorLabel = app.staticTexts["John Lasseter"]
    XCTAssertTrue(directorLabel.exists)
}
This test is 15 lines of ceremony for two taps and two assertions. Multiply that across every flow in your app and you get a test suite that nobody maintains. Tests written this way are also locked to a single device configuration — if the layout breaks only in landscape on iPad or under Dynamic Type accessibility sizes, you will not know until a user reports it.
Xcode 26 addresses both problems with a record-first workflow that generates well-structured Swift Testing code and a replay system that fans out across configurations automatically.
Recording UI Tests in Xcode 26
The centerpiece of Xcode 26’s UI testing overhaul is the UI Automation Recorder. Instead of hand-crafting element queries, you interact with the Simulator while Xcode watches and transcribes your actions into Swift Testing code.
Setting Up a Recording Session
To start recording, open your UI test target and create a new test function. Place your cursor inside the function body, then click the red Record button in the debug bar (or use the menu: Debug > Record UI Actions). Xcode launches the app in the Simulator and begins capturing.
import Testing

@testable import PixarCatalog

@Suite("Movie Navigation")
struct MovieNavigationTests {
    @Test("Tapping a movie navigates to its detail screen")
    @MainActor
    func navigateToMovieDetail() async throws {
        let app = AppLauncher.launch()
        // --- Recorded actions start here ---
        app.collectionViews.cells["Toy Story, 1995"].tap()
        #expect(app.navigationBars["Toy Story"].exists)
        #expect(
            app.staticTexts["Directed by John Lasseter"].exists
        )
        // --- Recorded actions end here ---
    }
}
Notice several things the recorder does well. It uses the Swift Testing @Test macro instead of legacy XCTest method
naming. It references cells by their full accessibility label (“Toy Story, 1995”) rather than a custom identifier you
might forget to set. And it generates #expect assertions rather than XCTAssert calls.
What the Recorder Captures
The recorder translates your Simulator interactions into a structured sequence:
- Taps and long presses become .tap() and .press(forDuration:) calls.
- Swipes and scrolls become .swipeUp(), .swipeDown(), or scroll-to-element queries.
- Text input becomes .typeText("...") on the focused element.
- System interactions like alert dismissals are captured as .buttons["Allow"].tap().
Tip: Before recording, enable Accessibility Inspector (Xcode > Open Developer Tool > Accessibility Inspector) to verify that your views expose meaningful labels. The recorder relies on the accessibility hierarchy — the richer your labels, the more readable the generated code.
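For example, a row that merges its children into a single accessibility element gives the recorder one readable label instead of several fragments. This is a sketch; the MovieRow view and its properties are illustrative, not from the post's project:

```swift
import SwiftUI

// Hypothetical row view. Merging children means the recorder and
// VoiceOver both see one element labeled "Toy Story, 1995" rather
// than separate title and year fragments.
struct MovieRow: View {
    let title: String
    let year: Int

    var body: some View {
        HStack {
            Text(title)
            Spacer()
            Text(String(year))
        }
        // Combine children into a single accessibility element
        .accessibilityElement(children: .combine)
        .accessibilityLabel("\(title), \(year)")
    }
}
```

With the combined label in place, the recorder emits a single query like app.cells["Toy Story, 1995"] for a tap on the row.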
Editing Generated Code
The recorder produces a first draft, not a finished test. After recording, you should refine the output:
@Test("Tapping a movie navigates to its detail screen")
@MainActor
func navigateToMovieDetail() async throws {
    let app = AppLauncher.launch()
    // Navigate to the detail screen
    let movieCell = app.collectionViews.cells["Toy Story, 1995"]
    try await movieCell.waitForExistence(
        timeout: .seconds(3)
    )
    movieCell.tap()
    // Verify the detail screen content
    let navBar = app.navigationBars["Toy Story"]
    #expect(navBar.exists)
    #expect(
        app.staticTexts["Directed by John Lasseter"].exists
    )
    #expect(app.staticTexts["Rating: 8.3 / 10"].exists)
}
The refinements here are small but important: we extracted the cell lookup into a named variable for clarity, added an
explicit waitForExistence to guard against animation timing, and added a third assertion to increase the test’s
specificity.
Apple Docs: XCUIElement (XCTest)
Multi-Configuration Replay
Recording a test once is useful. Replaying it across multiple device configurations in a single test run is where Xcode 26 genuinely saves hours.
Declaring Configurations
Xcode 26 introduces a Test Plan Configuration Matrix that lets you define axes of variation declaratively in your .xctestplan file. In code, you pair those axes with Swift Testing's parameterized tests:
// Using the @Test macro with parameterized configurations
@Test(
    "Movie detail renders correctly across configurations",
    .tags(.ui),
    arguments: DeviceConfiguration.allCases
)
@MainActor
func movieDetailRendering(
    config: DeviceConfiguration
) async throws {
    let app = AppLauncher.launch(with: config)
    app.collectionViews.cells["Finding Nemo, 2003"].tap()
    #expect(app.navigationBars["Finding Nemo"].exists)
    #expect(
        app.staticTexts["Directed by Andrew Stanton"].exists
    )
    #expect(app.images["moviePoster_findingNemo"].exists)
}
The DeviceConfiguration enum maps to axes defined in your test plan:
enum DeviceConfiguration: String,
    CaseIterable,
    CustomTestStringConvertible
{
    case iPhonePortrait = "iPhone 16 - Portrait"
    case iPhoneLandscape = "iPhone 16 - Landscape"
    case iPadPortrait = "iPad Pro 13-inch - Portrait"
    case iPadSplitView = "iPad Pro 13-inch - Split View"
    case dynamicTypeXL = "iPhone 16 - Accessibility XL"
    case darkMode = "iPhone 16 - Dark Mode"

    var testDescription: String { rawValue }
}
When you run this test, Xcode spawns Simulator instances for each configuration and executes the test in parallel. The results appear grouped by configuration in the Test Navigator.
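The AppLauncher helper used throughout these examples is not an Xcode API. A minimal sketch of what such a wrapper might look like, assuming the app interprets the forwarded configuration name at launch:

```swift
import XCTest

// Hypothetical helper -- centralizes launch setup so every
// recorded test starts from a known state.
enum AppLauncher {
    static func launch(
        with config: DeviceConfiguration? = nil
    ) -> XCUIApplication {
        let app = XCUIApplication()
        app.launchArguments = ["--uitesting"]
        if let config {
            // Forward the configuration name so the app (or a test
            // harness) can apply orientation, appearance, and so on.
            app.launchArguments += ["--config", config.rawValue]
        }
        app.launch()
        return app
    }
}
```

Keeping launch logic in one place also means a new launch flag (say, for stubbing the network) is added once rather than in every test.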
Configuration Axes
The test plan matrix supports several axes out of the box:
| Axis | Values | Catches |
|---|---|---|
| Device | iPhone, iPad, Mac | Layout constraint breaks |
| Orientation | Portrait, Landscape | Clipped content |
| Appearance | Light, Dark | Invisible text, missing assets |
| Dynamic Type | Default through XXXL | Truncated labels |
| Locale | Any installed locale | Direction and formatting |
Tip: Start with three configurations (iPhone portrait, iPad portrait, dark mode) and expand from there. Covering every permutation generates diminishing returns and slows your CI pipeline.
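One way to express that reduced set, reusing the DeviceConfiguration enum from above (the smokeConfigs constant and test name are illustrative):

```swift
import Testing

// A pull-request-sized subset; run DeviceConfiguration.allCases
// in the nightly pipeline instead.
let smokeConfigs: [DeviceConfiguration] = [
    .iPhonePortrait, .iPadPortrait, .darkMode
]

@Test("Smoke: catalog renders", arguments: smokeConfigs)
@MainActor
func smokeCatalogRendering(config: DeviceConfiguration) async throws {
    let app = AppLauncher.launch(with: config)
    // A coarse check that the main screen laid out at all
    #expect(app.collectionViews.firstMatch.exists)
}
```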
Running Configurations in CI
Multi-configuration replay integrates with xcodebuild through the test plan:
xcodebuild test \
  -project PixarCatalog.xcodeproj \
  -scheme PixarCatalog \
  -testPlan UIAutomation \
  -resultBundlePath ./results/ui-tests.xcresult
The test plan file (.xctestplan) carries all configuration definitions, so your CI script stays simple. Results for
every configuration land in the single .xcresult bundle.
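To inspect the bundle from a CI script, xcrun xcresulttool can export a machine-readable summary. The exact subcommands have shifted across Xcode releases, so verify against xcrun xcresulttool --help on your toolchain:

```shell
# Export a JSON summary of the run for CI reporting.
# On recent Xcode versions the equivalent may be
# `xcrun xcresulttool get test-results summary` or require --legacy.
xcrun xcresulttool get --format json \
  --path ./results/ui-tests.xcresult > ui-tests-summary.json
```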
Automation Explorer: Reviewing Failures
When a multi-configuration test fails, tracking down the cause used to mean sifting through console logs and guessing which screen state triggered the assertion. Xcode 26’s Automation Explorer replaces that guesswork with a visual timeline.
Navigating the Automation Explorer
After a test run completes, click the failing test in the Test Navigator and select Show in Automation Explorer. The explorer presents three synchronized panes:
- Action Timeline — A step-by-step list of every recorded action (tap, swipe, type) with timestamps.
- Screenshot Strip — Simulator screenshots captured at each action, showing exactly what the user saw.
- Element Hierarchy — The accessibility tree at the selected action step, so you can inspect element frames, labels, and values.
Clicking any action in the timeline updates the screenshot and hierarchy to that exact moment. This makes it trivial to see, for example, that the “Finding Nemo” cell was off-screen when the tap occurred, or that a label was truncated under Dynamic Type XXXL.
Diffing Across Configurations
When the same test passes on iPhone but fails on iPad, select both results in the Automation Explorer and use the Compare mode. Xcode displays the screenshot strips side by side, highlighting where the UI diverged. This is particularly valuable for catching layout constraint issues that only manifest at specific size classes.
// A test that might reveal configuration-specific issues
@Test("Movie list displays all sections")
@MainActor
func movieListSections() async throws {
    let app = AppLauncher.launch()
    // These sections should be visible without scrolling on iPad
    #expect(
        app.staticTexts["Toy Story Collection"].exists
    )
    #expect(
        app.staticTexts["Finding Nemo Collection"].exists
    )
    // On iPhone, the third section requires scrolling first
    if !app.staticTexts["Monsters, Inc. Collection"].exists {
        app.swipeUp()
        try await Task.sleep(for: .milliseconds(300))
    }
    #expect(
        app.staticTexts["Monsters, Inc. Collection"].exists
    )
}
Warning: The Automation Explorer stores screenshots in the .xcresult bundle, which can grow large. On CI, archive result bundles selectively or set a retention policy to avoid filling your build storage.
Advanced Usage
Custom Accessibility Identifiers for Stable Queries
The recorder generates queries based on the accessibility hierarchy it sees at recording time. If your labels change
with data (say, a movie title fetched from an API), the generated query breaks immediately. Set explicit
accessibilityIdentifier values for elements that your tests interact with:
struct MovieCard: View {
    let movie: Movie

    var body: some View {
        VStack {
            AsyncImage(url: movie.posterURL)
            Text(movie.title)
                .font(.headline)
            Text("Directed by \(movie.director)")
                .font(.subheadline)
        }
        .accessibilityIdentifier("movieCard_\(movie.slug)")
        .accessibilityLabel(
            "\(movie.title), \(movie.releaseYear)"
        )
    }
}
This gives you two access paths: accessibilityIdentifier for test queries that survive data changes, and
accessibilityLabel for VoiceOver users and the recorder’s human-readable output. Both matter.
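In a test, the two access paths might look like this. The queries assume the MovieCard view above; the element type is illustrative, since SwiftUI may expose the card as a button, cell, or other element depending on context:

```swift
import Testing
import XCTest

@Test("Identifier and label both locate the card")
@MainActor
func cardAccessPaths() async throws {
    let app = AppLauncher.launch()

    // Query by identifier: stable even if the displayed title
    // or year changes with the data
    let byIdentifier = app.otherElements["movieCard_toyStory"]

    // Query by label: what the recorder emits and VoiceOver reads
    let byLabel = app.otherElements["Toy Story, 1995"]

    #expect(byIdentifier.exists)
    #expect(byLabel.exists)
}
```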
Apple Docs: accessibilityIdentifier (SwiftUI)
Handling Asynchronous Content
UI tests routinely race against network requests and animations. Xcode 26 improves the waitForExistence API with
async/await support and configurable polling:
@Test("Movie detail loads poster from network")
@MainActor
func moviePosterLoads() async throws {
    let app = AppLauncher.launch()
    app.collectionViews.cells["Wall-E, 2008"].tap()
    // Wait for the network-loaded poster
    let poster = app.images["moviePoster_wallE"]
    try await poster.waitForExistence(
        timeout: .seconds(10)
    )
    #expect(poster.exists)
    // Verify the poster has non-zero dimensions
    let frame = poster.frame
    #expect(frame.width > 0)
    #expect(frame.height > 0)
}
Tip: For tests that depend on network data, consider launching with a launch argument that activates a stub server or local JSON fixture. This keeps your UI tests deterministic without sacrificing end-to-end coverage when you want it.
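On the app side, that launch argument check might route to a stub at startup. This is a sketch; MovieService and the concrete types are illustrative, not from the post's project:

```swift
import Foundation

// Hypothetical service abstraction for the catalog's data layer
protocol MovieService { /* fetch movies, posters, ... */ }
struct LiveMovieService: MovieService { /* real networking */ }
struct StubMovieService: MovieService {
    let fixtureName: String  // bundled JSON fixture to load
}

// Choose the service at launch based on the UI test flag
func makeMovieService() -> MovieService {
    if ProcessInfo.processInfo.arguments.contains("--stub-network") {
        return StubMovieService(fixtureName: "movies")
    }
    return LiveMovieService()
}
```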
Recording Parameterized Flows
You can record a flow once and then parameterize it for multiple data inputs. Record the happy path, extract the
variable parts, and use Swift Testing’s arguments: parameter:
@Test(
    "Tapping any movie in the catalog opens its detail",
    arguments: [
        "Toy Story, 1995",
        "Up, 2009",
        "Coco, 2017",
        "Soul, 2020"
    ]
)
@MainActor
func openMovieDetail(movieLabel: String) async throws {
    let app = AppLauncher.launch()
    let cell = app.collectionViews.cells[movieLabel]
    try await cell.waitForExistence(timeout: .seconds(5))
    cell.tap()
    let movieName = movieLabel.components(
        separatedBy: ", "
    ).first!
    #expect(app.navigationBars[movieName].exists)
}
This pattern gives you four test cases from a single recorded interaction. Each runs independently and reports its own pass/fail status in the Test Navigator.
Performance Considerations
UI tests are inherently slower than unit tests because they launch the full app and drive the Simulator. Xcode 26 introduces several improvements, but you still need to manage execution time deliberately.
Parallel Configuration Execution
Multi-configuration replay runs configurations in parallel by default, using separate Simulator instances. On a machine with an M-series chip and sufficient RAM, you can expect:
| Configs | Sequential | Parallel (approx.) |
|---|---|---|
| 1 | Baseline | Baseline |
| 3 | ~3x baseline | ~1.3x baseline |
| 6 | ~6x baseline | ~2x baseline |
Parallelism is bounded by available Simulator slots. On CI, ensure your runners have enough RAM — each Simulator instance consumes 400-800 MB. Apple recommends at least 16 GB for multi-configuration replay.
Reducing Test Suite Duration
Three strategies keep UI test times manageable:
- Minimize app launch overhead. Use launchArguments to skip onboarding, disable animations (UIView.setAnimationsEnabled(false)), and prepopulate data.
- Share Simulator state. Group tests that start from the same screen into a single @Suite so the Simulator is not relaunched between them.
- Be selective with configurations. Run the full configuration matrix nightly on CI and a reduced set (e.g., iPhone portrait only) on every pull request.
@Suite("Movie Detail Tests", .serialized)
struct MovieDetailTests {
    static let app: XCUIApplication = {
        let app = XCUIApplication()
        app.launchArguments = [
            "--uitesting",
            "--disable-animations",
            "--stub-network"
        ]
        app.launch()
        return app
    }()

    @Test("Detail screen shows director")
    @MainActor
    func showsDirector() async throws {
        let app = Self.app
        app.collectionViews
            .cells["Ratatouille, 2007"].tap()
        #expect(
            app.staticTexts["Directed by Brad Bird"].exists
        )
        app.navigationBars.buttons.firstMatch.tap()
    }

    @Test("Detail screen shows rating")
    @MainActor
    func showsRating() async throws {
        let app = Self.app
        app.collectionViews
            .cells["Inside Out, 2015"].tap()
        #expect(
            app.staticTexts["Rating: 8.1 / 10"].exists
        )
        app.navigationBars.buttons.firstMatch.tap()
    }
}
Warning: Using .serialized and a shared app instance trades test isolation for speed. If one test leaves the app in an unexpected state, subsequent tests may fail. Use this pattern only for read-only navigation tests, not for tests that mutate data.
When to Use (and When Not To)
| Scenario | Recommendation |
|---|---|
| Navigation flows | Use UI automation — these are the bugs it catches. |
| Visual regression | Use multi-configuration replay for devices and sizes. |
| Business logic | Prefer unit tests. UI tests add overhead here. |
| Pixel-perfect layout | Prefer snapshot testing with pixel-diff tooling. |
| Flaky CI tests | Fix the root cause: add waits, disable animations. |
| Accessibility | Combine with Accessibility Inspector for full coverage. |
The recording workflow lowers the cost of writing UI tests to nearly zero, but the cost of maintaining them remains
proportional to how often your UI changes. Invest in stable accessibilityIdentifier values and a disciplined
page-object pattern to keep maintenance tractable.
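A minimal page object for this app might look like the following sketch. The screen types and identifiers are illustrative; the point is that screen-specific queries live in one place, so tests read as intent and an identifier change is fixed once:

```swift
import XCTest

// Hypothetical page object for the catalog screen
struct CatalogScreen {
    let app: XCUIApplication

    func movieCard(slug: String) -> XCUIElement {
        app.otherElements["movieCard_\(slug)"]
    }

    // Tap a card and hand back the next screen's page object
    @discardableResult
    func openMovie(slug: String) -> DetailScreen {
        movieCard(slug: slug).tap()
        return DetailScreen(app: app)
    }
}

// Hypothetical page object for the detail screen
struct DetailScreen {
    let app: XCUIApplication

    func directorLabel(_ name: String) -> XCUIElement {
        app.staticTexts["Directed by \(name)"]
    }
}
```

A test then reads as CatalogScreen(app: app).openMovie(slug: "toyStory"), and a renamed identifier touches one file instead of every test that visits the screen.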
Summary
- Xcode 26’s UI Automation Recorder generates Swift Testing code by watching your Simulator interactions — no more hand-crafting element queries from scratch.
- Multi-configuration replay runs the same test across devices, orientations, appearances, Dynamic Type sizes, and locales in parallel, catching layout bugs that single-device testing misses.
- The Automation Explorer provides a visual timeline with screenshots and the accessibility hierarchy at each step, replacing console log archaeology with direct inspection.
- Set stable accessibilityIdentifier values on interactive elements and use accessibilityLabel for human-readable context — both improve test resilience and VoiceOver quality.
- UI tests complement, but do not replace, unit tests. Reserve them for navigation flows, cross-configuration rendering, and integration points where the view hierarchy itself is under test.
Xcode 26’s automation tools bring UI testing closer to the record-and-verify workflow that web developers have had for years. If you want to go deeper into testing patterns, explore Swift Testing Advanced for data-driven test design that pairs well with UI automation.