Generated by ChatGPT for use with LLMs which have not yet been trained on Swift 6.
Swift 6 LLM Primer
This file provides a set of Swift 6 idioms and conventions to prime a large language model (LLM) for generating Swift 6–compliant code. Include this markdown in the prompt context before your actual request.
// Assume the following strict concurrency settings:
// - SWIFT_VERSION = 6.0
// - SWIFT_STRICT_CONCURRENCY = complete
// Always use Swift Concurrency.
// Avoid DispatchQueue or Combine.
// All types should conform to Sendable where applicable.
// Swift 6 build configuration is assumed:
// - Strict concurrency checking is enabled.
// - All warnings are treated as errors.
// - Use value types and actor isolation to avoid data races.
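For SwiftPM targets, the language-mode and strict-concurrency settings above roughly correspond to a manifest like this minimal sketch (the package and target names are placeholders):

// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "MyLibrary",  // placeholder name
    targets: [
        .target(
            name: "MyLibrary",
            swiftSettings: [
                // Swift 6 language mode enables complete strict concurrency checking.
                .swiftLanguageMode(.v6)
            ]
        )
    ]
)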
// Use the new Observation system in SwiftUI:
import Observation

@Observable
class Library {
    var books: [Book] = []  // `Book` is assumed to be defined elsewhere
}
// Avoid using @Published or ObservableObject.
// Use @Bindable in views when editing observable properties.
// Use @Environment(Library.self) and @State for injecting and storing models.
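A minimal sketch of how these pieces fit together, using the `Library` model above; `Book` is assumed to exist, and the view and app names are illustrative:

import SwiftUI

struct LibraryView: View {
    // Read the @Observable model injected via .environment(_:) below.
    @Environment(Library.self) private var library

    var body: some View {
        // @Bindable lets the view derive bindings from an @Observable model.
        @Bindable var library = library
        BookListView(books: $library.books)
    }
}

struct BookListView: View {
    @Binding var books: [Book]

    var body: some View {
        Text("Books: \(books.count)")
    }
}

@main
struct LibraryApp: App {
    // Own the model with @State and inject it into the environment.
    @State private var library = Library()

    var body: some Scene {
        WindowGroup {
            LibraryView()
                .environment(library)
        }
    }
}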
// Use the new Swift Testing framework with @Test and #expect:
import Testing

struct MathTests {
    @Test
    func testSum() {
        let result = 2 + 2
        #expect(result == 4)
    }
}
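Swift Testing also supports parameterized tests and `#require` for unwrapping optionals; a small sketch with illustrative test cases:

import Testing

struct StringTests {
    // Parameterized test: runs once per argument.
    @Test(arguments: ["swift", "testing"])
    func isLowercased(word: String) {
        #expect(word == word.lowercased())
    }

    // #require fails and stops the test if the value is nil.
    @Test
    func parsesNumber() throws {
        let number = try #require(Int("42"))
        #expect(number == 42)
    }
}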
// Prefer structured concurrency (here, loadDataA/loadDataB are assumed to be async throwing functions):
async let a = loadDataA()
async let b = loadDataB()
let result = try await (a, b)
// For dynamic concurrency, use withTaskGroup
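A minimal sketch of a task group that fans out a dynamic number of child tasks; `fetchItem(id:)` is a hypothetical async function:

func fetchAll(ids: [Int]) async -> [String] {
    await withTaskGroup(of: String.self) { group in
        for id in ids {
            group.addTask { await fetchItem(id: id) }
        }
        var results: [String] = []
        for await item in group {
            results.append(item)
        }
        return results
    }
}

// Use actors to protect shared mutable state: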
actor Counter {
    private var count = 0

    func increment() {
        count += 1
    }

    func value() -> Int {
        count
    }
}
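Outside the actor, callers hop onto it with await:

func useCounter() async {
    let counter = Counter()
    await counter.increment()
    let current = await counter.value()
    print(current)  // prints 1
}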
// Avoid:
// - ObservableObject + @Published (use @Observable instead)
// - DispatchQueue or Combine (use async/await and actors)
// - Using AnyView to erase types (prefer opaque return types using `some View`; see the sketch below)
// - Storing state in the Environment unnecessarily
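For example, an opaque `some View` return type keeps the concrete view type visible to SwiftUI, while `AnyView` erases it (a sketch; the views and condition are illustrative):

import SwiftUI

// Preferred: opaque return type.
func statusView(isLoading: Bool) -> some View {
    Group {
        if isLoading {
            ProgressView()
        } else {
            Text("Done")
        }
    }
}

// Avoid: AnyView erases the underlying view type.
func erasedStatusView(isLoading: Bool) -> AnyView {
    isLoading ? AnyView(ProgressView()) : AnyView(Text("Done"))
}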