Image Playground SDK: Embedding On-Device Generative Images in Your App
Your designer just asked for a feature that lets users generate custom sticker images inside your app. You start researching cloud-based image generation APIs and quickly hit the usual wall: per-request costs, privacy reviews, content moderation pipelines, and latency that makes the feature feel sluggish. Apple’s Image Playground SDK sidesteps all of that by running a generative image model entirely on-device.
This post covers how to embed Image Playground in SwiftUI and UIKit apps, control generation with concepts and source images, handle the asynchronous lifecycle, and work around the SDK’s current limitations. We will not cover Core ML custom model integration or the Foundation Models framework for text generation — those live in their own dedicated posts.
Contents
- The Problem
- Image Playground at a Glance
- Presenting the Image Playground Sheet in SwiftUI
- Supplying Concepts to Guide Generation
- Using Source Images for Style Transfer
- UIKit Integration with ImagePlaygroundViewController
- Advanced Usage
- Performance Considerations
- When to Use (and When Not To)
- Summary
The Problem
Imagine you are building a Pixar-themed party invitation app. Users pick a character, write a greeting, and the app generates a custom illustration to go along with it. The naive approach is to call a cloud image generation endpoint:
```swift
import UIKit

/// Errors the cloud generation call can surface.
enum GenerationError: Error {
    case serverError
    case invalidImageData
}

struct InvitationGenerator {
    let apiKey: String
    let session: URLSession

    func generateImage(prompt: String) async throws -> UIImage {
        var request = URLRequest(url: URL(string: "https://api.example.com/v1/generate")!)
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        request.httpMethod = "POST"
        request.httpBody = try JSONEncoder().encode(["prompt": prompt])

        let (data, response) = try await session.data(for: request)
        guard let httpResponse = response as? HTTPURLResponse,
              httpResponse.statusCode == 200 else {
            throw GenerationError.serverError
        }
        guard let image = UIImage(data: data) else {
            throw GenerationError.invalidImageData
        }
        return image
    }
}
```
This works, but it introduces real costs: API fees scale with usage, user prompts leave the device (which means a privacy review and likely a disclosure in your App Store privacy nutrition label), you need a content moderation layer to catch inappropriate outputs, and response times depend on network conditions. For a feature meant to feel instant and fun, that is a lot of overhead.
Image Playground eliminates every one of those concerns. The model runs on the Apple Neural Engine, generation happens entirely on-device, and Apple handles content safety at the system level.
Image Playground at a Glance
Image Playground shipped with iOS 18.2 as part of Apple Intelligence. The SDK lives in the `ImagePlayground` framework and exposes two primary integration surfaces:

- SwiftUI: The `.imagePlaygroundSheet()` view modifier presents the system generation UI as a sheet.
- UIKit: `ImagePlaygroundViewController` gives you the same experience as a view controller you present modally.
Both paths give users the full Image Playground interface — style selection (Animation, Illustration, Sketch), text prompt entry, and iterative refinement. Your app supplies optional concepts (text descriptions and imagery hints) and receives the final image when the user taps Done.
Note: Image Playground requires Apple Intelligence to be enabled on the device. It is available on iPhone 15 Pro and later, and on iPad and Mac models with an M1 chip or later. Always check availability before presenting the UI.
Presenting the Image Playground Sheet in SwiftUI
The fastest integration path is a single view modifier. Here is a minimal example for our party invitation app:
```swift
import SwiftUI
import ImagePlayground

struct InvitationEditorView: View {
    @State private var showPlayground = false
    @State private var generatedImage: URL?

    var body: some View {
        VStack(spacing: 20) {
            if let generatedImage,
               let image = UIImage(contentsOfFile: generatedImage.path) {
                Image(uiImage: image)
                    .resizable()
                    .scaledToFit()
                    .frame(maxHeight: 300)
                    .clipShape(RoundedRectangle(cornerRadius: 16))
            } else {
                ContentUnavailableView(
                    "No Image Yet",
                    systemImage: "wand.and.stars",
                    description: Text("Tap the button to generate a party invitation image.")
                )
            }

            Button("Create Invitation Art") {
                showPlayground = true
            }
            .buttonStyle(.borderedProminent)
        }
        .imagePlaygroundSheet(isPresented: $showPlayground) { url in
            generatedImage = url
        }
    }
}
```
The `.imagePlaygroundSheet(isPresented:onCompletion:)` modifier handles the entire presentation lifecycle. When the user finishes creating an image and taps Done, the framework writes the result to a temporary file and hands your closure a URL pointing to it. If the user cancels, the completion closure is never called; the modifier also accepts an optional `onCancellation:` closure if you need to react to dismissal.
Tip: The URL points to a temporary location managed by the system. If you need the image to persist, copy it to your app’s documents directory or save it to the photo library immediately in the completion handler.
Supplying Concepts to Guide Generation
Letting users start from a blank canvas is fine, but you can pre-seed the generation with concepts to steer the output toward your app's theme. Concepts are instances of `ImagePlaygroundConcept` and come in two flavors: short text descriptions created with `.text(_:)`, and concepts distilled from longer passages of text with `.extracted(from:title:)`.
```swift
struct PixarInvitationView: View {
    @State private var showPlayground = false
    @State private var generatedImage: URL?

    private var concepts: [ImagePlaygroundConcept] {
        [
            .text("A cowboy doll and a space ranger standing on a birthday cake"),
            .text("Colorful balloons and confetti in a child's bedroom"),
            .text("Toy story adventure party celebration")
        ]
    }

    var body: some View {
        VStack {
            Button("Generate Party Image") {
                showPlayground = true
            }
        }
        .imagePlaygroundSheet(
            isPresented: $showPlayground,
            concepts: concepts
        ) { url in
            generatedImage = url
        }
    }
}
```
The concepts parameter accepts an array of ImagePlaygroundConcept values. The model uses them as creative guidance
rather than strict instructions — think of them as weighted suggestions. Users can still modify the prompt and style in
the playground UI, so concepts set the starting direction rather than locking the output.
How Many Concepts Should You Provide?
Apple does not document a hard limit, but in practice, two to four text concepts produce the most coherent results. Overloading the array with conflicting descriptions (for example, “underwater scene” alongside “outer space adventure”) tends to produce muddled outputs. Keep concepts thematically aligned.
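When the guiding text comes from user content rather than hand-written strings, you can let the system distill it instead of crafting concepts yourself. A minimal sketch, assuming the `.extracted(from:title:)` concept factory, which pulls the most relevant content out of a longer passage (the function name and strings here are illustrative):

```swift
import ImagePlayground

/// Builds playground concepts from a user's free-form invitation text.
/// .extracted(from:title:) lets the system identify relevant themes in
/// longer text instead of treating the whole string as one description.
func concepts(forInvitation body: String, title: String) -> [ImagePlaygroundConcept] {
    [
        .extracted(from: body, title: title),
        .text("Festive birthday party illustration")
    ]
}
```

This pairs one extracted concept with one short thematic anchor, which keeps the array small and aligned per the guidance above.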
Using Source Images for Style Transfer
Beyond text concepts, you can supply a source image that the model uses as a visual reference. This is powerful for features where users want to generate stylized versions of their own photos — say, turning a selfie into an animated party avatar.
```swift
struct AvatarGeneratorView: View {
    @State private var showPlayground = false
    @State private var generatedAvatar: URL?

    let userPhoto: Image // Assume this comes from PhotosPicker

    private var concepts: [ImagePlaygroundConcept] {
        [
            .text("Animated character in Pixar style at a birthday party")
        ]
    }

    var body: some View {
        Button("Create My Party Avatar") {
            showPlayground = true
        }
        .imagePlaygroundSheet(
            isPresented: $showPlayground,
            concepts: concepts,
            sourceImage: userPhoto
        ) { url in
            generatedAvatar = url
        }
    }
}
```
The sourceImage parameter accepts a SwiftUI Image. The playground extracts the subject from the photo and
incorporates it into the generated output. This works best with clear, well-lit subjects — blurry backgrounds or extreme
angles reduce the quality of subject extraction.
Warning: Not all source images produce usable results. The system may silently ignore the source image if it cannot extract a meaningful subject. Always design your UI to handle the case where the generated image does not visually reference the source.
UIKit Integration with ImagePlaygroundViewController
If your codebase is UIKit-based or you need more control over the presentation, use `ImagePlaygroundViewController` directly. The view controller follows the standard delegate pattern:
```swift
import UIKit
import ImagePlayground

final class InvitationViewController: UIViewController {
    private var generatedImageURL: URL?

    @objc private func presentPlayground() {
        let playground = ImagePlaygroundViewController()
        playground.delegate = self

        // Add concepts programmatically
        playground.concepts = [
            .text("A toy cowboy hosting a wild west birthday party"),
            .text("Warm sunset colors, festive decorations")
        ]

        present(playground, animated: true)
    }

    private func updateUI(with url: URL) {
        // Load the generated file and refresh the invitation preview here.
    }
}

extension InvitationViewController: ImagePlaygroundViewController.Delegate {
    func imagePlaygroundViewController(
        _ controller: ImagePlaygroundViewController,
        didCreateImageAt url: URL
    ) {
        generatedImageURL = url
        dismiss(animated: true) { [weak self] in
            self?.updateUI(with: url)
        }
    }

    func imagePlaygroundViewControllerDidCancel(
        _ controller: ImagePlaygroundViewController
    ) {
        dismiss(animated: true)
    }
}
```
The delegate provides two callbacks: one for successful image creation and one for cancellation. Unlike the SwiftUI modifier, you are responsible for dismissing the view controller yourself.
Setting a Source Image in UIKit
To provide a source image with the UIKit API, set the sourceImage property on the view controller before presentation:
```swift
let playground = ImagePlaygroundViewController()
playground.delegate = self
// The UIKit sourceImage property is typed CGImage?, so bridge from UIImage.
playground.sourceImage = userSelectedUIImage.cgImage
playground.concepts = [
    .text("Animated space ranger character")
]
present(playground, animated: true)
```

Note that the UIKit `sourceImage` property is typed `CGImage?`, whereas the SwiftUI modifier expects a SwiftUI `Image`. Plan your conversions accordingly if you are bridging between the two frameworks; a `UIImage` exposes its backing bitmap through its `cgImage` property, which can be `nil` for images backed by a `CIImage`.
Advanced Usage
Checking Availability at Runtime
Image Playground is not available on every device or in every region. Before showing any UI that depends on it, check availability:
```swift
import ImagePlayground

func isImagePlaygroundAvailable() -> Bool {
    ImagePlaygroundViewController.isAvailable
}
```
The isAvailable static property returns false on unsupported hardware, when Apple Intelligence is disabled, or when
the feature is not available in the user’s region. Use this to conditionally show or hide your generation UI rather than
presenting the sheet and having it fail.
```swift
struct ConditionalPlaygroundButton: View {
    @State private var showPlayground = false

    var body: some View {
        Group {
            if ImagePlaygroundViewController.isAvailable {
                Button("Generate Artwork") {
                    showPlayground = true
                }
                .imagePlaygroundSheet(isPresented: $showPlayground) { url in
                    // Handle generated image
                }
            } else {
                Text("Image generation requires Apple Intelligence on iPhone 15 Pro or later.")
                    .foregroundStyle(.secondary)
                    .font(.footnote)
            }
        }
    }
}
```
Tip: Wrap the availability check in a view model or environment value so you can test both paths easily in previews and unit tests without needing physical hardware. In SwiftUI, the `supportsImagePlayground` environment value provides the same check without referencing the UIKit class.
Handling the Temporary File Lifecycle
The URL returned by Image Playground points to a temporary file in the system’s cache directory. It is not guaranteed to persist across app launches. If your app needs the image long-term, copy it immediately:
```swift
func persistGeneratedImage(from temporaryURL: URL) throws -> URL {
    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let fileName = "invitation-\(UUID().uuidString).png"
    let destinationURL = documentsURL.appendingPathComponent(fileName)
    try FileManager.default.copyItem(at: temporaryURL, to: destinationURL)
    return destinationURL
}
```
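If you instead want the result in the user's photo library, PhotoKit can import the file directly. A sketch, assuming your Info.plist declares `NSPhotoLibraryAddUsageDescription` and the user grants add-only access:

```swift
import Photos

/// Saves the generated image file into the user's photo library.
/// Requires the NSPhotoLibraryAddUsageDescription Info.plist key.
func saveToPhotoLibrary(_ imageURL: URL) async throws {
    try await PHPhotoLibrary.shared().performChanges {
        _ = PHAssetChangeRequest.creationRequestForAssetFromImage(atFileURL: imageURL)
    }
}
```

Because the playground's URL is temporary, call this (or the documents-directory copy above) from the completion handler itself rather than deferring it.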
Combining with PhotosPicker
A common pattern is to let users pick a photo, then use it as a source image for stylization. Here is how these two APIs compose together:
```swift
import SwiftUI
import PhotosUI
import ImagePlayground

struct StylizedPhotoView: View {
    @State private var selectedItem: PhotosPickerItem?
    @State private var sourceImage: Image?
    @State private var showPlayground = false
    @State private var resultURL: URL?

    var body: some View {
        VStack(spacing: 16) {
            PhotosPicker("Choose a Photo", selection: $selectedItem, matching: .images)

            if sourceImage != nil {
                Button("Stylize This Photo") {
                    showPlayground = true
                }
                .buttonStyle(.borderedProminent)
            }
        }
        .onChange(of: selectedItem) { _, newItem in
            Task {
                if let data = try? await newItem?.loadTransferable(type: Data.self),
                   let uiImage = UIImage(data: data) {
                    sourceImage = Image(uiImage: uiImage)
                }
            }
        }
        .imagePlaygroundSheet(
            isPresented: $showPlayground,
            concepts: [.text("Animated character in a Pixar movie poster style")],
            sourceImage: sourceImage
        ) { url in
            resultURL = url
        }
    }
}
```
Performance Considerations
Image generation runs on the Apple Neural Engine and takes roughly 5 to 15 seconds depending on the device, the selected style, and thermal state. Here are the practical implications:
Memory footprint. The generative model is loaded into memory when the playground UI is presented and released when it is dismissed. On devices near their memory limit, presenting Image Playground can trigger memory warnings. Avoid presenting it alongside other memory-intensive operations (like loading a full-resolution photo library grid).
Thermal throttling. Repeated generations in quick succession can cause thermal throttling, which progressively slows each subsequent generation. If your app encourages iterative refinement — “try another style” — consider adding a brief cool-down hint in your UI after several consecutive generations.
No background generation. The SDK does not expose a headless generation API. All generation happens through the system-provided UI. You cannot batch-generate images or run generation in the background. This is a deliberate design choice by Apple to keep the user in control of what gets generated.
Apple Docs: ImagePlayground Framework

To profile the impact on your app's memory and thermal state, use the Memory Graph Debugger and the thermal state API (`ProcessInfo.processInfo.thermalState`) to monitor conditions before and after presenting the playground.
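One way to act on that signal is to gate a "try another style" button on the current thermal state. A minimal sketch using only Foundation; treating `.serious` and `.critical` as "too hot" is an illustrative threshold, not Apple guidance:

```swift
import Foundation

/// Returns true when the device is likely cool enough for another
/// generation pass. The cutoff between .fair and .serious is a
/// judgment call for this example.
func canOfferAnotherGeneration() -> Bool {
    switch ProcessInfo.processInfo.thermalState {
    case .nominal, .fair:
        return true
    case .serious, .critical:
        return false
    @unknown default:
        return true
    }
}
```

You can also observe `ProcessInfo.thermalStateDidChangeNotification` to re-enable the button once the device cools down.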
When to Use (and When Not To)
| Scenario | Recommendation |
|---|---|
| Creative user-facing features (stickers, avatars, illustrations) | Use Image Playground. The on-device model and system UI handle safety and style selection. |
| Automated image pipelines without user interaction | Avoid. There is no headless API — the user must interact with the playground sheet. |
| Precise, brand-specific image output | Avoid. You cannot control the model’s style beyond the three system presets (Animation, Illustration, Sketch). |
| Apps targeting older hardware (pre-iPhone 15 Pro) | Avoid or provide a fallback. The feature requires Apple Intelligence-capable hardware. |
| Privacy-sensitive contexts (medical, financial) | Prefer Image Playground over cloud APIs. All data stays on-device with no network calls. |
| High-volume generation (dozens of images per session) | Use cautiously. Thermal throttling and memory pressure make rapid sequential generation impractical. |
Image Playground is best understood as a user-driven creative tool, not a programmatic image pipeline. If you need full control over the generation process — custom models, batch processing, or headless operation — look at Core ML integration or a cloud-based service instead.
Summary
- Image Playground runs Apple’s generative image model entirely on-device, eliminating API costs, latency, and privacy concerns.
- In SwiftUI, a single `.imagePlaygroundSheet()` modifier handles the entire presentation and result lifecycle.
- Supply `ImagePlaygroundConcept` values (text descriptions) and optional source images to guide generation toward your app's theme.
- UIKit integration uses `ImagePlaygroundViewController` with a standard delegate pattern.
- Always check `ImagePlaygroundViewController.isAvailable` before presenting generation UI; the feature requires Apple Intelligence on supported hardware.
- Copy the returned temporary URL to persistent storage immediately if you need the image beyond the current session.
Image Playground is one piece of the broader Apple Intelligence story. If you want to generate text on-device instead of images, check out Apple’s Foundation Models Framework for a deep dive into on-device LLM inference.