<!-- Comments
Link: https://llm.codes/?https%3A%2F%2Fdeveloper.apple.com%2Fdocumentation%2Ffoundationmodels
Issue 1: The tool assumes that anchors are separate links.
For the following anchors, the tool duplicates the same page in the markdown for each instance.
https://developer.apple.com/documentation/foundationmodels
#app-main (x2)
#overview
#topics
#Essentials
#Prompting
#Guided-generation
#Tool-calling
#Feedback
https://developer.apple.com/documentation/foundationmodels/adding-intelligent-app-features-with-generative-models
#app-main (x2)
#Overview
#Configure-the-sample-code-project
#see-also
#Essentials
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output
#app-main (x2)
#overview
#Handle-guardrail-errors
#Build-boundaries-on-input-and-output
#Instruct-the-model-for-added-safety
#Add-a-deny-list-of-blocked-terms
#Create-a-risk-assessment
#Write-and-maintain-adversarial-safety-tests
#Report-safety-concerns
#Monitor-safety-for-model-or-guardrail-updates
#see-also
#Essentials
https://developer.apple.com/documentation/foundationmodels/tool
#app-main (x2)
#mentions
#overview
#topics
#Invoking-a-tool
#Getting-the-tool-properties
#relationships
#inherits-from
#see-also
#Tool-calling
https://developer.apple.com/documentation/foundationmodels/systemlanguagemodel/usecase
#app-main (x2)
#topics
#Getting-the-use-cases
#Comparing-use-cases
#Default-Implementations
#relationships
#conforms-to
#see-also
#Essentials
https://developer.apple.com/documentation/foundationmodels/systemlanguagemodel
#app-main (x2)
#mentions
#overview
#topics
#Loading-the-model-with-a-use-case
#Loading-the-model-with-an-adapter
#Checking-model-availability
#Getting-the-default-model
#Instance-Properties
#relationships
#conforms-to
#see-also
#Essentials
https://developer.apple.com/documentation/foundationmodels/instructions
#app-main (x2)
#mentions
#overview
#topics
#Creating-instructions
#Default-Implementations
#relationships
#conforms-to
#see-also
#Prompting
https://developer.apple.com/documentation/foundationmodels/generating-content-and-performing-tasks-with-foundation-models
#app-main (x2)
#overview
#Check-for-availability
#Create-a-session
#Provide-a-prompt-to-the-model
#Provide-instructions-to-the-model
#Generate-a-response
#Tune-generation-options-and-optimize-performance
#see-also
#Essentials
Issue 2: Multiline code blocks do not have ``` wrappers anymore.
Search for `struct FindContacts: Tool {` or `struct GenerativeView: View {`
Issue 3: It appears that all instances of `class="inline-link"` are broken (there is a `)` at the end).
https://developer.apple.com/documentation/foundationmodels/tool)
https://developer.apple.com/documentation/foundationmodels/generating-content-and-performing-tasks-with-foundation-models)
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output)
https://developer.apple.com/documentation/foundationmodels/adding-intelligent-app-features-with-generative-models)
https://developer.apple.com/documentation/foundationmodels/systemlanguagemodel)
https://developer.apple.com/documentation/foundationmodels/systemlanguagemodel/usecase)
https://developer.apple.com/documentation/foundationmodels/languagemodelsession)
https://developer.apple.com/documentation/foundationmodels/instructions)
https://developer.apple.com/documentation/foundationmodels/prompt)
https://developer.apple.com/documentation/foundationmodels/transcript)
https://developer.apple.com/documentation/foundationmodels/generationoptions)
https://developer.apple.com/documentation/foundationmodels/generating-swift-data-structures-with-guided-generation)
https://developer.apple.com/documentation/foundationmodels/generable)
https://developer.apple.com/documentation/foundationmodels/expanding-generation-with-tool-calling)
https://developer.apple.com/documentation/foundationmodels/generate-dynamic-game-content-with-guided-generation-and-tools)
https://developer.apple.com/documentation/foundationmodels/languagemodelfeedbackattachment)
https://developer.apple.com/documentation/foundationmodels/adding-intelligent-app-features-with-generative-models#app-main)
...
-->
<!--
Downloaded via https://llm.codes by @steipete on June 14, 2025 at 04:59 PM
Source URL: https://developer.apple.com/documentation/foundationmodels
Total pages processed: 169
URLs filtered: Yes
Content de-duplicated: Yes
Availability strings filtered: Yes
Comprehensive filtering: Yes
-->
Foundation Models
Framework
Perform tasks with the on-device model that specializes in language understanding, structured output, and tool calling.
The Foundation Models framework provides access to Apple’s on-device large language model that powers Apple Intelligence to help you perform intelligent tasks specific to your use case. The text-based on-device model identifies patterns that allow for generating new text that’s appropriate for the request you make, and it can make decisions to call code you write to perform specialized tasks.
Generate text content based on requests you make. The on-device model excels at a diverse range of text generation tasks, like summarization, entity extraction, text understanding, refinement, dialog for games, generating creative content, and more.
Generate entire Swift data structures with guided generation. With the @Generable macro, you can define custom data structures and the framework provides strong guarantees that the model generates instances of your type.
To expand what the on-device foundation model can do, use Tool to create custom tools that the model can call to assist with handling your request. For example, the model can call a tool that searches a local or online database for information, or calls a service in your app.
To use the on-device language model, people need to turn on Apple Intelligence on their device. For a list of supported devices, see Apple Intelligence.
For more information about acceptable usage of the Foundation Models framework, see Acceptable use requirements for the Foundation Models framework.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
struct UseCase
A type that represents the use case for prompting.
class LanguageModelSession
An object that represents a session that interacts with a large language model.
struct Instructions
Instructions define the model’s intended behavior on prompts.
struct Prompt
A prompt from a person to the model.
struct Transcript
A transcript that documents interactions with a language model.
struct GenerationOptions
Options that control how the model generates its response to a prompt.
Generating Swift data structures with guided generation
Create robust apps by describing output you want programmatically.
protocol Generable
A type that the model uses when responding to prompts.
Expanding generation with tool calling
Build tools that enable the model to perform tasks that are specific to your use case.
Generate dynamic game content with guided generation and tools
Make gameplay more lively with AI generated dialog and encounters personalized to the player.
protocol Tool
A tool that a model can call to gather information at runtime or perform side effects.
struct LanguageModelFeedbackAttachment
Feedback appropriate for attaching to Feedback Assistant.
Beta Software
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
Learn more about using Apple's beta software
Tool (Beta)
Protocol
A tool that a model can call to gather information at runtime or perform side effects.
```swift
protocol Tool : Sendable
```
Tool calling gives the model the ability to call your code to incorporate up-to-date information like recent events and data from your app. A tool includes a name and a description that the framework puts in the prompt to let the model decide when and how often to call your tool.
```swift
struct FindContacts: Tool {
    let name = "findContacts"
    let description = "Find a specific number of contacts"

    @Generable
    struct Arguments {
        @Guide(description: "The number of contacts to get", .range(1...10))
        let count: Int
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        var contacts: [CNContact] = []
        // Fetch a number of contacts using the arguments.
        let formattedContacts = contacts.map { "\($0.givenName) \($0.familyName)" }
        return ToolOutput(GeneratedContent(properties: ["contactNames": formattedContacts]))
    }
}
```
Tools must conform to Sendable so the framework can run them concurrently. If the model needs to pass the output of one tool as the input to another, it executes back-to-back tool calls.
You control the life cycle of your tool, so you can track its state between calls to the model. For example, you might store a list of database records that you don’t want to reuse between tool calls.
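For example, here is a minimal sketch of a stateful tool that skips records it already returned earlier in the session. The `Recipe` type and `fetchRecipes(matching:)` function are hypothetical stand-ins for your own data layer:

```swift
import Foundation
import FoundationModels

// Hypothetical app types standing in for your own data layer.
struct Recipe {
    let id: String
    let name: String
}

func fetchRecipes(matching keyword: String) -> [Recipe] {
    // Query your local database here.
    []
}

// A class keeps state alive between calls. Tool requires Sendable, so
// guard the mutable state with a lock and opt in with @unchecked Sendable.
final class SearchRecipes: Tool, @unchecked Sendable {
    let name = "searchRecipes"
    let description = "Search the app's recipe collection for recipes matching a keyword"

    @Generable
    struct Arguments {
        @Guide(description: "A keyword to search recipes for")
        let keyword: String
    }

    private let lock = NSLock()
    private var returnedIDs: Set<String> = []

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Skip records that an earlier call in this session already returned.
        let fresh = lock.withLock {
            let matches = fetchRecipes(matching: arguments.keyword)
                .filter { !returnedIDs.contains($0.id) }
            matches.forEach { returnedIDs.insert($0.id) }
            return matches
        }
        return ToolOutput(GeneratedContent(properties: ["recipeNames": fresh.map(\.name)]))
    }
}

// Attach the tool when you create a session.
let session = LanguageModelSession(tools: [SearchRecipes()])
```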
func call(arguments:)
A language model will call this method when it wants to leverage this tool.
Required
struct ToolOutput
A structure that contains the output a tool generates.
associatedtype Arguments : ConvertibleFromGeneratedContent
The arguments that this tool should accept.
var description: String
A natural language description of when and how to use the tool.
var includesSchemaInInstructions: Bool
If true, the tool’s name, description, and parameters schema will be injected into the instructions of sessions that leverage this tool.
Required. Default implementation provided.
var name: String
A unique name for the tool, such as “get_weather”, “toggleDarkMode”, or “search contacts”.
var parameters: GenerationSchema
A schema for the parameters this tool accepts.
Inherits from: Sendable, SendableMetatype
Generating content and performing tasks with Foundation Models
Article
Enhance the experience in your app by prompting an on-device large language model.
The Foundation Models framework lets you tap into the on-device large models at the core of Apple Intelligence. You can enhance your app by using generative models to create content or perform tasks. The framework supports language understanding and generation based on model capabilities like text extraction and summarization that you can use to:
- Generate a title, description, or tags for content
- Generate a list of search suggestions relevant to your app
- Transform product reviews into structured data you can visualize
- Invoke your own tools to assist the model with performing app-specific tasks
Check model availability by creating an instance of SystemLanguageModel with the default property. The default property provides the same model Apple Intelligence uses, and supports text generation.
Model availability depends on device factors like:
- The device must support Apple Intelligence.
- The device must have Apple Intelligence turned on in System Settings.
- The device must have sufficient battery and not be in Game Mode.
Always verify model availability first, and plan for a fallback experience in case the model is unavailable.
```swift
struct GenerativeView: View {
    // Create a reference to the system language model.
    private var model = SystemLanguageModel.default

    var body: some View {
        switch model.availability {
        case .available:
            // Show your intelligence UI.
        case .unavailable(.deviceNotEligible):
            // Show an alternative UI.
        case .unavailable(.appleIntelligenceNotEnabled):
            // Ask the person to turn on Apple Intelligence.
        case .unavailable(.modelNotReady):
            // The model isn't ready because it's downloading or because of other system reasons.
        case .unavailable(let other):
            // The model is unavailable for an unknown reason.
        }
    }
}
```
After confirming that the model is available, create a LanguageModelSession object to call the model. For a single-turn interaction, create a new session each time you call the model:

```swift
// Create a session with the system model.
let session = LanguageModelSession()
```
For a multiturn interaction — where the model retains some knowledge of what it produced — reuse the same session each time you call the model.
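For example, here is a minimal sketch of a multiturn interaction (the prompts are illustrative):

```swift
// Reuse one session so the model retains context across turns.
let session = LanguageModelSession()

let first = try await session.respond(to: "Suggest a name for a cozy coffee shop.")
print(first.content)

// The follow-up can refer to the earlier turn because the session
// records every prompt and response in its transcript.
let second = try await session.respond(to: "Now write a one-line slogan for it.")
print(second.content)
```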
A Prompt is an input that the model responds to. Prompt engineering is the art of designing high-quality prompts so that the model generates the best possible response for the request you make. A prompt can be as short as “hello”, or as long as multiple paragraphs. Designing a prompt involves a lot of exploration to discover the best approach, and includes optimizing prompt length and writing style.
When thinking about the prompt you want to use in your app, consider using conversational language in the form of a question or command. For example, “What’s a good month to visit Paris?” or “Generate a food truck menu.”
Write prompts that focus on a single and specific task, like “Write a profile for the dog breed Siberian Husky”. When a prompt is long and complicated, the model takes longer to respond, and may respond in unpredictable ways. If you have a complex generation task in mind, break the task down into a series of specific prompts.
You can refine your prompt by telling the model exactly how much content it should generate. A prompt like “Write a profile for the dog breed Siberian Husky” often takes a long time to process as the model generates a full multi-paragraph essay. If you specify “using three sentences”, it speeds up processing and generates a concise summary. Use phrases like “in a single sentence” or “in a few words” to shorten the generation time and produce shorter text.
```swift
// Generate a longer response for a specific command.
let simple = "Write me a story about pears."

// Quickly generate a concise response.
let quick = "Write the profile for the dog breed Siberian Husky using three sentences."
```
Instructions help steer the model in a way that fits the use case of your app. The model obeys prompts at a lower priority than the instructions you provide. When you provide instructions to the model, consider specifying details like:
- What the model’s role is; for example, “You are a mentor,” or “You are a movie critic”.
- What the model should do, like “Help the person extract calendar events,” or “Help the person by recommending search suggestions”.
- What the style preferences are, like “Respond as briefly as possible”.
- What the possible safety measures are, like “Respond with ‘I can’t help with that’ if you’re asked to do something dangerous”.
Use content you trust in instructions because the model follows them more closely than the prompt itself. When you initialize a session with instructions, it affects all prompts the model responds to in that session. Instructions can also include example responses to help steer the model. When you add examples to your prompt, you provide the model with a template that shows what a good response looks like.
To call the model with a prompt, call respond(to:options:isolation:) on your session. The response call is asynchronous because it may take a few seconds for Foundation Models to generate the response.
```swift
let instructions = """
Suggest five related topics. Keep them concise (three to seven words) and make sure they
build naturally from the person's topic.
"""

let session = LanguageModelSession(instructions: instructions)

let prompt = "Making homemade bread"
let response = try await session.respond(to: prompt)
```
Instead of working with raw string output from the model, the framework offers guided generation to generate a custom Swift data structure you define. For more information about guided generation, see Generating Swift data structures with guided generation.
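As a brief sketch of what that looks like (the `SearchSuggestions` type here is illustrative):

```swift
// Describe the shape of the output you want; the framework guarantees
// the model produces an instance of this type.
@Generable
struct SearchSuggestions {
    @Guide(description: "A list of suggested search terms", .count(4))
    var searchTerms: [String]
}

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Generate a list of suggested search terms for an app about making smoothies.",
    generating: SearchSuggestions.self
)
print(response.content.searchTerms)
```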
When you make a request to the model, you can provide custom tools to help the model complete the request. If the model determines that a Tool can assist with the request, the framework calls your Tool to perform additional actions like retrieving content from your local database. For more information about tool calling, see Expanding generation with tool calling.
To get the best results for your prompt, experiment with different generation options. GenerationOptions affects the runtime parameters of the model, and you can customize them for every request you make.
```swift
// Customize the temperature to increase creativity.
let options = GenerationOptions(temperature: 2.0)

let session = LanguageModelSession()

let prompt = "Write me a story about coffee."
let response = try await session.respond(
    to: prompt,
    options: options
)
```
When you test apps that use the framework, use Xcode Instruments to understand more about the requests you make, like the time it takes to perform a request. When you make a request, you can access the Transcript entries that describe the actions the model takes during your LanguageModelSession.
Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:) error:
```swift
do {
    let session = LanguageModelSession()
    let topic = // A potentially harmful topic.
    let prompt = "Write a respectful and funny story about \(topic)."
    let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
    // Handle the safety error.
}
```
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
```swift
enum TopicOptions {
    case family
    case nature
    case work
}

let topicChoice = TopicOptions.nature
let prompt = """
    Generate a wholesome and empathetic journal prompt that helps
    this person reflect on \(topicChoice)
    """
```
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
```swift
@Generable
enum Breakfast {
    case waffles
    case pancakes
    case bagels
    case eggs
}

let session = LanguageModelSession()
let userInput = "I want something sweet."
let prompt = "Pick the ideal breakfast for request: \(userInput)"
let response = try await session.respond(to: prompt, generating: Breakfast.self)
```
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
```swift
do {
    let instructions = """
        ALWAYS respond in a respectful way.
        If someone asks you to generate content that might be sensitive,
        you MUST decline with 'Sorry, I can't do that.'
        """
    let session = LanguageModelSession(instructions: instructions)
    let prompt = // Open input from a person using the app.
    let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
    // Handle the safety error.
}
```
If you want to include open input from people, instructions for safety are recommended. For an additional layer of safety, use a format string in normal prompts that wraps people’s input in your own content that specifies how the model should respond:
```swift
let userInput = // The input a person enters in the app.
let prompt = """
    Generate a wholesome and empathetic journal prompt that helps
    this person reflect on their day. They said: \(userInput)
    """
```
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list is anything you don’t want people to be able to input to your app, including unsafe terms, names of people or products, or anything that’s not relevant to the feature you provide. Implement a deny list similarly to guardrails by creating a function that checks the input and the model output:
```swift
let session = LanguageModelSession()
let userInput = // The input a person enters in the app.
let prompt = "Generate a wholesome story about: \(userInput)"

// A function you create that evaluates whether the input
// contains anything in your deny list.
if verifyText(prompt) {
    let response = try await session.respond(to: prompt)

    // Compare the output to evaluate whether it contains anything in your deny list.
    if verifyText(response.content) {
        return response
    } else {
        // Handle the unsafe output.
    }
} else {
    // Handle the unsafe input.
}
```
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest deny list when it’s connected to the network. Hosting your deny list allows you to update your list when you need to and avoids requiring a full app update if a safety issue arises.
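A minimal sketch of such a check, assuming a simple in-code deny list and that `verifyText(_:)` returns true when the text is safe:

```swift
// A simple in-code deny list; alternatively, download this from your server.
let denyList: [String] = ["unsafe term", "blocked product name"]

/// Returns true when the text contains no deny-listed terms.
func verifyText(_ text: String) -> Bool {
    let lowercased = text.lowercased()
    return !denyList.contains { lowercased.contains($0) }
}
```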
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
- List each AI feature in your app.
- For each feature, list possible safety risks that could occur, even if they seem unlikely.
- For each safety risk, score how serious the harm would be if that thing occurred, from mild to critical.
- For each safety risk, assign a strategy for how you’ll mitigate the risk in your app.
For example, an app might include one feature with the fixed-choice input pattern for generation and one feature with the open-input pattern for generation, which is higher safety risk:
| Feature | Harm | Severity | Mitigation |
| --- | --- | --- | --- |
| Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer characters’ responses to be safe; safety testing. |
| Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
| Player can make a coffee from a fixed menu of options. | None identified. | | |
| Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like a poor-quality model output, think about how your generative AI feature could affect people, including real-world scenarios where someone might act based on information generated by your app.
Although most people will interact with your app in respectful ways, it’s important to anticipate possible failure modes where certain input or contexts could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety on input like:
- Input that is nonsensical, snippets of code, or random characters.
- Input that includes sensitive content.
- Input that includes controversial topics.
- Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app — even safe ones — as part of your app testing. For each prompt test, log the timestamp, full input prompt, the model’s response, and whether it activates any built-in safety or mitigations you’ve included in your app. When starting out, manually read the model’s response on all tests to ensure it meets your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each prompt. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
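A minimal sketch of such a test loop, assuming a hand-maintained list of adversarial and safe prompts and manual review of the logged output:

```swift
// Safe, adversarial, and nonsensical prompts to run as part of app testing.
let testPrompts: [String] = [
    "Write a wholesome story about coffee.",  // A safe, built-in prompt.
    "asdf 123 %%% <script>",                  // Nonsensical input.
    // Add sensitive, controversial, and vague inputs here.
]

for prompt in testPrompts {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: prompt)
        // Log the timestamp, prompt, and response for manual or automated grading.
        print("\(Date()) | \(prompt) | \(response.content)")
    } catch LanguageModelSession.GenerationError.guardrailViolation {
        print("\(Date()) | \(prompt) | guardrail violation")
    } catch {
        print("\(Date()) | \(prompt) | error: \(error)")
    }
}
```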
Someone might purposefully attempt to break your feature or produce bad output — especially someone who won’t be harmed by their actions. But keep in mind that it’s very important to identify cases where someone could accidentally cause harm to self or others during normal app use.
Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline harmful cases that need your judgement. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.
Somewhere in your app, it’s important to include a way for people to report potentially harmful content. Continuously monitor the feedback you receive, and respond quickly to any safety issues that arise. If someone reports a safety concern that you believe isn’t being properly handled by Apple’s built-in guardrails, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use LanguageModelFeedbackAttachment to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program, you can test your app with new model versions ahead of people using your app. When the model updates, it’s important to re-run your full prompt tests in addition to your adversarial safety tests because the model’s response may change. Your risk assessment can help you track any change to safety risks in your app.
Apple may update the built-in guardrails at any time outside of the regular OS update cycle, for example, to respond rapidly to reported safety concerns. Include all of the prompts you use in your app in your test suite, and run tests regularly to identify when prompts start activating the guardrails.
Adding intelligent app features with generative models (Beta)
Sample Code
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
Download
Xcode 26.0+ (Beta)
To configure the sample code project, do the following in Xcode:
- Open the sample with the latest version of Xcode.
- Set the developer team for all targets to let Xcode automatically manage the provisioning profile. For more information, see Set the bundle ID and Assign the project to a team.
SystemLanguageModel (Beta)
Class
An on-device large language model capable of text generation tasks.
```swift
final class SystemLanguageModel
```
The SystemLanguageModel refers to the on-device text foundation model that powers Apple Intelligence. Use default to access the base version of the model and perform general-purpose text generation tasks. To access a specialized version of the model, initialize the model with SystemLanguageModel.UseCase to perform tasks like contentTagging.
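For example, a minimal sketch of loading a specialized model and handing it to a session (this assumes the session initializer that takes a model, listed on the LanguageModelSession page):

```swift
// Load the specialized content-tagging version of the system model.
let taggingModel = SystemLanguageModel(useCase: .contentTagging)

// Create a session backed by that model.
let session = LanguageModelSession(model: taggingModel)
```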
Verify model availability before you use the model. Model availability depends on device factors like:
- The device must support Apple Intelligence.
- Apple Intelligence must be turned on in System Settings.
- The device must have sufficient battery.
- The device cannot be in Game Mode.
Use SystemLanguageModel.Availability to change what your app shows to people based on the availability condition:
```swift
struct GenerativeView: View {
    // Create a reference to the system language model.
    private var model = SystemLanguageModel.default

    var body: some View {
        switch model.availability {
        case .available:
            // Show your intelligence UI.
        case .unavailable(.deviceNotEligible):
            // Show an alternative UI.
        case .unavailable(.appleIntelligenceNotEnabled):
            // Ask the person to turn on Apple Intelligence.
        case .unavailable(.modelNotReady):
            // The model isn't ready because it's downloading or because of other system reasons.
        case .unavailable(let other):
            // The model is unavailable for an unknown reason.
        }
    }
}
```
convenience init(useCase: SystemLanguageModel.UseCase)
Creates a system language model for a specific use case.
struct UseCase
A type that represents the use case for prompting.
convenience init(adapter: SystemLanguageModel.Adapter)
Creates the base version of the model with an adapter.
struct Adapter
Specializes the system language model for custom use cases.
var isAvailable: Bool
A convenience getter to check if the system is entirely ready.
var availability: SystemLanguageModel.Availability
The availability of the language model.
enum Availability
The availability status for a specific system language model.
static let `default`: SystemLanguageModel
The base version of the model.
Languages supported by the model.
Conforms to: Copyable, Observable, Sendable, SendableMetatype
SystemLanguageModel.UseCase (Beta)
Structure
A type that represents the use case for prompting.
```swift
struct UseCase
```
static let general: SystemLanguageModel.UseCase
A use case for general prompting.
static let contentTagging: SystemLanguageModel.UseCase
A use case for content tagging.
static func == (lhs: SystemLanguageModel.UseCase, rhs: SystemLanguageModel.UseCase) -> Bool
Returns a Boolean value indicating whether two values are equal.
Conforms to: Equatable, Sendable, SendableMetatype
LanguageModelSession (Beta)
Class
An object that represents a session that interacts with a large language model.
```swift
final class LanguageModelSession
```
A session is a single context that you use to generate content, and it maintains state between requests. You can reuse the existing instance or create a new one each time you call the model. When you create a session, you can provide instructions that tell the model what its role is and provide guidance on how to respond.
```swift
let session = LanguageModelSession(instructions: """
    You are a motivational workout coach that provides quotes to inspire
    and motivate athletes.
    """
)

let prompt = "Generate a motivational quote for my next workout."
let response = try await session.respond(to: prompt)
```
The framework records each call to the model in a Transcript that includes all prompts and responses. If your session exceeds the available context size, it throws a LanguageModelSession.GenerationError.exceededContextWindowSize(_:) error.
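A minimal sketch of handling that error by falling back to a fresh session (how much of the old conversation to carry forward is up to you):

```swift
var session = LanguageModelSession()
let prompt = "Generate a motivational quote for my next workout."

do {
    let response = try await session.respond(to: prompt)
    print(response.content)
} catch LanguageModelSession.GenerationError.exceededContextWindowSize {
    // The transcript no longer fits in the model's context window.
    // Start over with a fresh session, optionally seeding it with a
    // condensed summary of the previous conversation.
    session = LanguageModelSession()
    let response = try await session.respond(to: prompt)
    print(response.content)
}
```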
convenience init(model:guardrails:tools:instructions:)
Start a new session in a blank slate state with an instructions builder.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
struct Guardrails
Guardrails flag sensitive content from model input and output.
protocol Tool
A tool that a model can call to gather information at runtime or perform side effects.
struct Instructions
Instructions define the model’s intended behavior on prompts.
Start a session by rehydrating from a transcript.
struct Transcript
A transcript that documents interactions with a language model.
func prewarm()
Requests that the system eagerly load the resources required for this session into memory.
var isResponding: Bool
A Boolean value that indicates a response is being generated.
var transcript: Transcript
A full history of interactions, including user inputs and model responses.
func respond(to:generating:includeSchemaInPrompt:options:isolation:)
Produces a generable object as a response to a prompt.
func respond(to:options:isolation:)
Produces a response to a prompt.
func respond(to:schema:includeSchemaInPrompt:options:isolation:)
Produces a generated content type as a response to a prompt and schema.
struct Prompt
A prompt from a person to the model.
struct Response
A structure that stores the output of a response call.
struct GenerationOptions
Options that control how the model generates its response to a prompt.
func streamResponse(to:options:)
Produces a response stream to a prompt.
func streamResponse(to:generating:includeSchemaInPrompt:options:)
Produces a response stream to a prompt and schema.
func streamResponse(to:schema:includeSchemaInPrompt:options:)
Produces a response stream for a type.
struct ResponseStream
A structure that stores the output of a response stream.
struct GeneratedContent
A type that represents structured, generated content.
protocol ConvertibleFromGeneratedContent
A type that can be initialized from generated content.
protocol ConvertibleToGeneratedContent
A type that can be converted to generated content.
enum GenerationError
An error that occurs while generating a response.
struct ToolCallError
An error that occurs while a system language model is calling a tool.
Copyable
Observable
Sendable
SendableMetatype
Beta
- Foundation Models
- Instructions Beta
Structure
Instructions define the model’s intended behavior on prompts.
struct Instructions
Generating content and performing tasks with Foundation Models
Improving safety from generative model output
Instructions are typically provided by you to define the role and behavior of the model. In the code below, the instructions specify that the model replies with topics rather than, for example, a recipe:
```swift
let instructions = """
    Suggest related topics. Keep them concise (three to seven words) and make sure they
    build naturally from the person's topic.
    """

let session = LanguageModelSession(instructions: instructions)

let prompt = "Making homemade bread"
let response = try await session.respond(to: prompt)
```
Apple trains the model to obey instructions over any commands it receives in prompts, so don’t include untrusted content in instructions. For more on how instructions impact generation quality and safety, see Improving safety from generative model output.
init(_:)
struct InstructionsBuilder
A type that represents an instructions builder.
protocol InstructionsRepresentable
Conforming types represent instructions.
Copyable
InstructionsRepresentable
class LanguageModelSession
An object that represents a session that interacts with a large language model.
Beta
struct Prompt
A prompt from a person to the model.
struct Transcript
A transcript that documents interactions with a language model.
struct GenerationOptions
Options that control how the model generates its response to a prompt.
- Foundation Models
- Prompt Beta
Structure
A prompt from a person to the model.
struct Prompt
Generating content and performing tasks with Foundation Models
Prompts can contain content written by you, an outside source, or input directly from people using your app. You can initialize a Prompt from a string literal:
```swift
let prompt = Prompt("What are miniature schnauzers known for?")
```
To dynamically control the prompt’s content based on your app’s state, build a Prompt as a computed property. In the code below, if the Boolean is true, the prompt includes a second line of text:
```swift
let responseShouldRhyme = true
let prompt = Prompt {
    "Answer the following question from the user: \(userInput)"
    if responseShouldRhyme {
        "Your response MUST rhyme!"
    }
}
```
If your prompt includes input from people, consider wrapping the input in a string template with your own prompt to better steer the model’s response. For more information on handling inputs in your prompts, see Improving safety from generative model output.
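For example, a minimal sketch of that wrapping pattern; the template wording here is illustrative:

```swift
import FoundationModels

// Assume `userInput` holds text a person entered in your app.
let userInput = "Making homemade bread"

// Wrap the raw input inside your own prompt text to steer the response.
let prompt = Prompt("""
    Respond to the person's topic below with three related suggestions.
    Topic: \(userInput)
    """)
```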
init(_:)
struct PromptBuilder
A type that represents a prompt builder.
protocol PromptRepresentable
A protocol that represents a prompt.
Copyable
PromptRepresentable
Sendable
SendableMetatype
class LanguageModelSession
An object that represents a session that interacts with a large language model.
Beta
struct Instructions
Instructions define the model’s intended behavior on prompts.
struct Transcript
A transcript that documents interactions with a language model.
struct GenerationOptions
Options that control how the model generates its response to a prompt.
- Foundation Models
- Transcript Beta
Structure
A transcript that documents interactions with a language model.
struct Transcript
Generating content and performing tasks with Foundation Models
init(entries: [Transcript.Entry])
Creates a transcript.
init(from: any Decoder) throws
Creates a new instance by decoding from the given decoder.
enum Entry
An entry in a transcript.
enum Segment
The types of segments that may be included in a transcript entry.
var entries: [Transcript.Entry]
An ordered list of entries, representing inputs to and outputs from the model.
struct Instructions
Instructions you provide to the model that define its behavior.
struct Prompt
A prompt from the user asking the model.
struct Response
A response from the model.
struct ResponseFormat
Specifies a response format that the model must conform its output to.
struct StructuredSegment
A segment containing structured content.
struct TextSegment
A segment containing text.
struct ToolCall
A tool call generated by the model containing the name of a tool and arguments to pass to it.
struct ToolCalls
A collection of tool calls generated by the model.
struct ToolDefinition
A definition of a tool.
struct ToolOutput
A tool output provided back to the model.
func encode(to: any Encoder) throws
Encodes this value into the given encoder.
Returns a Boolean value indicating whether two values are equal.
Copyable
Decodable
Encodable
Equatable
Sendable
SendableMetatype
class LanguageModelSession
An object that represents a session that interacts with a large language model.
Beta
Instructions define the model’s intended behavior on prompts.
A prompt from a person to the model.
struct GenerationOptions
Options that control how the model generates its response to a prompt.
- Foundation Models
- GenerationOptions Beta
Structure
Options that control how the model generates its response to a prompt.
struct GenerationOptions
Generating content and performing tasks with Foundation Models
Create a GenerationOptions structure when you want to adjust the way the model generates its response. Use this structure to adjust how the model chooses output tokens, and to specify penalties for repeating tokens or generating longer responses.
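For example, a minimal sketch that lowers the temperature and caps the response length; the values are illustrative rather than recommendations, and `session` and `prompt` are assumed to exist:

```swift
import FoundationModels

let options = GenerationOptions(
    sampling: nil,              // Keep the default sampling strategy.
    temperature: 0.7,           // Lower values favor more predictable output.
    maximumResponseTokens: 200  // Cap the length of the response.
)

let response = try await session.respond(to: prompt, options: options)
```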
init(sampling: GenerationOptions.SamplingMode?, temperature: Double?, maximumResponseTokens: Int?)
Creates generation options that control token sampling behavior.
var maximumResponseTokens: Int?
The maximum number of tokens the model is allowed to produce in its response.
var sampling: GenerationOptions.SamplingMode?
A sampling strategy for how the model picks tokens when generating a response.
struct SamplingMode
A type that defines how values are sampled from a probability distribution.
var temperature: Double?
Temperature influences the confidence of the model’s response.
Returns a Boolean value indicating whether two values are equal.
Equatable
Sendable
SendableMetatype
class LanguageModelSession
An object that represents a session that interacts with a large language model.
Beta
struct Instructions
Instructions define the model’s intended behavior on prompts.
struct Prompt
A prompt from a person to the model.
struct Transcript
A transcript that documents interactions with a language model.
https://developer.apple.com/documentation/foundationmodels/generating-swift-data-structures-with-guided-generation
- Foundation Models
- Generating Swift data structures with guided generation
Article
Create robust apps by describing output you want programmatically.
When you perform a request, the model returns a raw string in its natural language format. Raw strings require you to manually parse the details you want. Instead of working with raw strings, the framework provides guided generation, which gives strong guarantees that the response is in a format you expect.
To use guided generation, describe the output you want as a new Swift type. When you make a request to the model, include your custom type, and the framework performs the work necessary to return an object with its parameters filled in for you. The framework uses constrained sampling when generating output, which defines the rules for what the model can generate. Constrained sampling prevents the model from producing malformed output and provides you with results as a type you define.
For more information about creating a session and prompting the model, see Generating content and performing tasks with Foundation Models.
To conform your type to Generable, describe the type and the parameters to guide the response of the model. The framework supports generating content with basic Swift types like Bool, Int, Float, Double, Decimal, and Array. For example, if you only want the model to return a numeric result, call respond(to:generating:includeSchemaInPrompt:options:isolation:) using the type Float:
let prompt = "How many tablespoons are in a cup?" let session = LanguageModelSession(model: .default)
// Generate a response with the type Float
, instead of String
.
let response = try await session.respond(to: prompt, generating: Float.self)
A schema provides the ability to control the values of a property, and you can specify guides to control values you associate with it. The framework provides two macros that help you with schema creation. Use Generable(description:) on structures, actors, and enumerations; and only use Guide(description:) with stored properties.
When you add descriptions to Generable properties, you help the model understand the semantics of the properties. Keep the descriptions as short as possible; long descriptions take up additional context size and can introduce latency. The following example creates a type that describes a cat and includes a name, an age that’s constrained to a range of values, and a short profile:
@Generable(description: "Basic profile information about a cat") struct CatProfile { // A guide isn't necessary for basic fields. var name: String
@Guide(description: "The age of the cat", .range(0...20)) var age: Int
@Guide(description: "A one sentence profile about the cat's personality") var profile: String }
You can nest custom Generable types inside other Generable types, and mark enumerations with associated values as Generable. The Generable macro ensures that all associated and nested values are themselves generable. This allows for advanced use cases, like creating complex data types or dynamically generating views at runtime.
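For instance, a minimal sketch of a nested generable enumeration with associated values; the types and guide text here are illustrative, not part of the framework:

```swift
import FoundationModels

@Generable
struct Encounter {
    @Generable
    enum Outcome {
        case reward(item: String)
        case dialog(line: String)
    }

    @Guide(description: "What happens when the player arrives")
    var outcome: Outcome
}
```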
After creating your type, use it along with a LanguageModelSession to prompt the model. When you use a Generable type, the framework prevents the model from producing malformed output and removes the need for any manual string parsing.
```swift
// Generate a response using a custom type.
let response = try await session.respond(
    to: "Generate a cute rescue cat",
    generating: CatProfile.self
)
```
If you don’t know what you want the model to produce at compile time, use DynamicGenerationSchema to define what you need. For example, you might be working on a restaurant app and want to restrict the model to pick from menu options that a restaurant provides. Because each restaurant provides a different menu, the schema won’t be known in its entirety until runtime.
```swift
// Create the dynamic schema at runtime.
let menuSchema = DynamicGenerationSchema(
    name: "Menu",
    properties: [
        DynamicGenerationSchema.Property(
            name: "dailySoup",
            schema: DynamicGenerationSchema(
                name: "dailySoup",
                anyOf: ["Tomato", "Chicken Noodle", "Clam Chowder"]
            )
        )
        // Add additional properties.
    ]
)
```
After creating a dynamic schema, use it to create a GenerationSchema that you provide with your request. Creating a generation schema can throw an error if there are conflicting property names, undefined references, or duplicate types.
```swift
// Create the schema.
let schema = try GenerationSchema(root: menuSchema, dependencies: [])

// Pass the schema to the model to guide the output.
let response = try await session.respond(
    to: "The prompt you want to make.",
    schema: schema
)
```
The response you get is an instance of GeneratedContent. You can decode the outputs from schemas you define at runtime by calling value(_:forProperty:) for the property you want.
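For example, a minimal sketch that decodes the daily soup from the menu response above; it assumes `response.content` is the GeneratedContent instance and that value(_:forProperty:) takes the expected type as its first argument:

```swift
// Decode a property from the runtime-schema response above.
let menu = response.content
let dailySoup = try menu.value(String.self, forProperty: "dailySoup")
print(dailySoup)
```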
protocol Generable
A type that the model uses when responding to prompts.
Beta
- Foundation Models
- Generable Beta
Protocol
A type that the model uses when responding to prompts.
protocol Generable : ConvertibleFromGeneratedContent, ConvertibleToGeneratedContent
Generating Swift data structures with guided generation
Annotate your Swift structure or enumeration with the @Generable macro to allow the model to respond to prompts by generating an instance of your type. Use the @Guide macro to provide natural language descriptions of your properties, and programmatically control the values that the model can generate.
```swift
@Generable
struct SearchSuggestions {
    @Guide(description: "A list of suggested search terms", .count(4))
    var searchTerms: [SearchTerm]

    @Generable
    struct SearchTerm {
        // Use a generation identifier for types the framework generates.
        var id: GenerationID

        @Guide(description: "A 2 or 3 word search term, like 'Beautiful sunsets'")
        var searchTerm: String
    }
}
```
macro Generable(description: String?)
Conforms a type to generable.
macro Guide(description: String)
Allows for influencing the allowed values of properties of a generable type.
macro Guide(description:_:)
struct GenerationGuide
Guides that control how values are generated.
static var generationSchema: GenerationSchema
An instance of the generation schema.
Required
struct GenerationSchema
A type that describes the properties of an object and any guides on their values.
struct GenerationID
A unique identifier that is stable for the duration of a response, but not across responses.
associatedtype PartiallyGenerated : ConvertibleFromGeneratedContent = Self
The partially generated type of this struct. A representation of partially generated content.
Required. Default implementation provided.
struct DynamicGenerationSchema
The dynamic counterpart to the generation schema type that you use to construct schemas at runtime.
ConvertibleFromGeneratedContent
ConvertibleToGeneratedContent
InstructionsRepresentable
PromptRepresentable
GeneratedContent
Create robust apps by describing output you want programmatically.
- Foundation Models
- Expanding generation with tool calling
Article
Build tools that enable the model to perform tasks that are specific to your use case.
Tools provide a way to extend the functionality of the model for your own use cases. Tool-calling allows the model to interact with external code you create to fetch up-to-date information, ground responses in sources of truth that you provide, and perform side effects, like turning on dark mode.
You can create tools that enable the model to:
- Query entries from your app’s database and reference them in its answer.
- Perform actions within your app, like adjusting the difficulty in a game or making a web request to get additional information.
- Integrate with other frameworks, like Contacts or HealthKit, that use existing privacy and security mechanisms.
When you prompt the model with a question or make a request, the model decides whether it can provide an answer or if it needs the help of a tool. When the model determines that a tool can help, it calls the tool with arguments that the tool can use, and the tool returns control to the model after it completes its task. A tool contains the arguments that the tool accepts, and a method that the model calls when it wants to use the tool. The model can call call(arguments:) concurrently with itself or with other tools. The following example shows a tool that accepts a search term and a number of recipes to retrieve:
```swift
struct BreadDatabaseTool: Tool {
    let name = "searchBreadDatabase"
    let description = "Searches a local database for bread recipes."

    @Generable
    struct Arguments {
        @Guide(description: "The type of bread to search for")
        var searchTerm: String

        @Guide(description: "The number of recipes to get", .range(1...6))
        var limit: Int
    }

    struct Recipe {
        var name: String
        var description: String
        var link: URL
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        var recipes: [Recipe] = []

        // Put your code here to retrieve a list of recipes from your database.

        let formattedRecipes = recipes.map {
            "Recipe for '\($0.name)': \($0.description) Link: \($0.link)"
        }
        return ToolOutput(
            GeneratedContent(properties: ["recipes": formattedRecipes])
        )
    }
}
```
When you provide descriptions to generable properties, you help the model understand the semantics of the arguments. Keep descriptions as short as possible because long descriptions take up context size and can introduce latency.
Tools use guided generation for the Arguments property. For more information about guided generation, see Generating Swift data structures with guided generation.
When you create a session, you can provide a list of tools that are relevant to the task you want to complete. The tools you provide are available for all future interactions with the session. The following example initializes a session with a tool that the model can call when it determines that it would help satisfy the prompt:
```swift
let session = LanguageModelSession(
    tools: [BreadDatabaseTool()]
)

let response = try await session.respond(
    to: "Find three sourdough bread recipes"
)
```
Tool output can be a string, or a GeneratedContent object. The model can call a tool multiple times in parallel to satisfy the request, like when retrieving weather details for several cities:
```swift
struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Retrieve the latest weather information for a city"

    @Generable
    struct Arguments {
        @Guide(description: "The city to get weather information for")
        var city: String
    }

    struct Forecast: Encodable {
        var city: String
        var temperature: Int
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        var temperature = "unknown"

        // Get the temperature for the city by using WeatherKit.

        let forecast = GeneratedContent(properties: [
            "city": arguments.city,
            "temperature": temperature,
        ])
        return ToolOutput(forecast)
    }
}
```
```swift
// Create a session with default instructions that guide the requests.
let session = LanguageModelSession(
    tools: [WeatherTool()],
    instructions: "Help the person with getting weather information"
)

// Make a request that compares the temperature between several locations.
let response = try await session.respond(
    to: "Is it hotter in Boston, Wichita, or Pittsburgh?"
)
```
When an error happens during tool calling, the session throws a LanguageModelSession.ToolCallError that includes the underlying error and the tool that threw it. This helps you understand what happened during the tool call, including any custom error types that your tool produces. You can throw errors from your tools to escape calls when you detect something is wrong, like when the person using your app doesn’t allow access to the required data, or a network call is taking longer than expected. Alternatively, your tool can return a string ToolOutput that briefly tells the model what didn’t work, like “Cannot access the database.”
```swift
do {
    let answer = try await session.respond(to: "Find a recipe for tomato soup.")
} catch let error as LanguageModelSession.ToolCallError {

    // Access the name of the tool, like BreadDatabaseTool.
    print(error.tool.name)

    // Access an underlying error that your tool throws and check if the tool
    // encounters a specific condition.
    if case .databaseIsEmpty = error.underlyingError as? SearchBreadDatabaseToolError {
        // Display an error in the UI.
    }
} catch {
    print("Some other error: \(error)")
}
```
A session contains an observable transcript property that allows you to track when, and how many times, the model calls your tools. A transcript also provides the ability to construct a representation of the call graph for debugging purposes, and pairs well with SwiftUI to visualize session history.
```swift
struct MyHistoryView: View {
    @State var session = LanguageModelSession(
        tools: [BreadDatabaseTool()]
    )

    var body: some View {
        List(session.transcript) { entry in
            // Placeholder views; replace them with your own UI.
            switch entry {
            case .instructions(let instructions):
                // Display the instructions the model uses.
                Text(String(describing: instructions))
            case .prompt(let prompt):
                // Display the prompt made to the model.
                Text(String(describing: prompt))
            case .toolCall(let call):
                // Display the call details for a tool, like the tool name and arguments.
                Text(String(describing: call))
            case .toolOutput(let output):
                // Display the output that a tool provides.
                Text(String(describing: output))
            default:
                // Handle other entry types, like the model's responses.
                EmptyView()
            }
        }
    }
}
```
Generate dynamic game content with guided generation and tools
Make gameplay more lively with AI generated dialog and encounters personalized to the player.
protocol Tool
A tool that a model can call to gather information at runtime or perform side effects.
Beta
https://developer.apple.com/documentation/foundationmodels/generate-dynamic-game-content-with-guided-generation-and-tools
- Foundation Models
- Generate dynamic game content with guided generation and tools Beta
Sample Code
Make gameplay more lively with AI generated dialog and encounters personalized to the player.
Download
Xcode 26.0+ Beta
Expanding generation with tool calling
Build tools that enable the model to perform tasks that are specific to your use case.
protocol Tool
A tool that a model can call to gather information at runtime or perform side effects.
Beta
- Foundation Models
- LanguageModelFeedbackAttachment Beta
Structure
Feedback appropriate for attaching to Feedback Assistant.
struct LanguageModelFeedbackAttachment
Improving safety from generative model output
Use this type to build out user feedback experiences in your app. After collecting feedback, serialize it into a JSON or JSONL file and submit it to Apple using Feedback Assistant. This type supports simple thumbs up or down feedback experiences, all the way to experiences that ask people to compare multiple outputs and explain their preferences.
The following code shows how to create a LanguageModelFeedbackAttachment, using the session transcript to find the input that produced the provided entry. Then, it encodes the data for you to use when submitting feedback.
```swift
private func submitFeedback(entry: Transcript.Entry, positive: Bool) {
    // Create a feedback object with the model input, output, and a sentiment value.
    let feedback = LanguageModelFeedbackAttachment(
        input: Array(session.transcript.entries.prefix(while: { $0 != entry })),
        output: [entry],
        sentiment: positive ? .positive : .negative
    )

    // Convert the feedback into JSON data to attach to a Feedback Assistant report.
    let data = try? JSONEncoder().encode(feedback)
}
```
Creates feedback from a person regarding a single transcript.
enum Sentiment
A sentiment regarding the model’s response.
struct Issue
An issue with the model’s response.
Creates feedback from a person that indicates their preference among several outputs generated for the same input.
func encode(to: any Encoder) throws
Encodes this value into the given encoder.
Encodable
Sendable
SendableMetatype
https://developer.apple.com/documentation/foundationmodels/adding-intelligent-app-features-with-generative-models#app-main
- Foundation Models
- Adding intelligent app features with generative models Beta
Sample Code
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
Download
Xcode 26.0+ Beta
To configure the sample code project, do the following in Xcode:
- Open the sample with the latest version of Xcode.
- Set the developer team for all targets to let Xcode automatically manage the provisioning profile. For more information, see Set the bundle ID and Assign the project to a team.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output#app-main
- Foundation Models
- Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:) error:
```swift
do {
    let session = LanguageModelSession()
    let topic = // A potentially harmful topic.
    let prompt = "Write a respectful and funny story about \(topic)."
    let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
    // Handle the safety error.
}
```
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
```swift
enum TopicOptions {
    case family
    case nature
    case work
}

let topicChoice = TopicOptions.nature
let prompt = """
    Generate a wholesome and empathetic journal prompt that helps
    this person reflect on \(topicChoice)
    """
```
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
```swift
@Generable
enum Breakfast {
    case waffles
    case pancakes
    case bagels
    case eggs
}

let session = LanguageModelSession()
let userInput = "I want something sweet."
let prompt = "Pick the ideal breakfast for request: \(userInput)"
let response = try await session.respond(to: prompt, generating: Breakfast.self)
```
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
```swift
do {
    let instructions = """
        ALWAYS respond in a respectful way.
        If someone asks you to generate content that might be sensitive,
        you MUST decline with 'Sorry, I can't do that.'
        """
    let session = LanguageModelSession(instructions: instructions)
    let prompt = // Open input from a person using the app.
    let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
    // Handle the safety error.
}
```
If you want to include open input from people, instructions for safety are recommended. For an additional layer of safety, use a format string in normal prompts that wraps people’s input in your own content that specifies how the model should respond:
```swift
let userInput = // The input a person enters in the app.
let prompt = """
    Generate a wholesome and empathetic journal prompt that helps
    this person reflect on their day. They said: \(userInput)
    """
```
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list is anything you don’t want people to be able to input to your app, including unsafe terms, names of people or products, or anything that’s not relevant to the feature you provide. Implement a deny list similarly to guardrails by creating a function that checks the input and the model output:
```swift
let session = LanguageModelSession()
let userInput = // The input a person enters in the app.
let prompt = "Generate a wholesome story about: \(userInput)"

// A function you create that evaluates whether the input
// contains anything in your deny list.
if verifyText(prompt) {
    let response = try await session.respond(to: prompt)

    // Compare the output to evaluate whether it contains anything in your deny list.
    if verifyText(response.content) {
        return response
    } else {
        // Handle the unsafe output.
    }
} else {
    // Handle the unsafe input.
}
```
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest version when it’s connected to the network. Hosting your deny list allows you to update it when you need to, and avoids requiring a full app update if a safety issue arises.
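A minimal sketch of such a check follows; the terms and the substring-matching strategy are illustrative, and a production implementation may need normalization, word boundaries, and locale handling:

```swift
// A hypothetical deny list; replace these placeholders with terms
// that matter for your app.
let denyList: Set<String> = ["first blocked term", "second blocked term"]

// Returns true when the text contains nothing from the deny list.
func verifyText(_ text: String) -> Bool {
    let lowercased = text.lowercased()
    return !denyList.contains { lowercased.contains($0) }
}
```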
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
- List each AI feature in your app.
- For each feature, list possible safety risks that could occur, even if they seem unlikely.
- For each safety risk, score how serious the harm would be if that thing occurred, from mild to critical.
- For each safety risk, assign a strategy for how you’ll mitigate the risk in your app.
For example, an app might include one feature that uses the fixed-choice input pattern for generation, and another that uses the open-input pattern, which carries a higher safety risk:
| Feature | Harm | Severity | Mitigation |
| --- | --- | --- | --- |
| Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer character responses to be safe; safety testing. |
| Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
| Player can make a coffee from a fixed menu of options. | None identified. | | |
| Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like a poor-quality model output, think about how your generative AI feature could affect people, including real world scenarios where someone might act based on information generated by your app.
Although most people will interact with your app in respectful ways, it’s important to anticipate possible failure modes, where certain inputs or contexts could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety with inputs like:
- Input that is nonsensical, snippets of code, or random characters.
- Input that includes sensitive content.
- Input that includes controversial topics.
- Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app, even safe ones, as part of your app testing. For each prompt test, log the timestamp, the full input prompt, the model’s response, and whether it activates the built-in safety features or any mitigations you’ve included in your app. When starting out, manually read the model’s responses for all tests to ensure they meet your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each response. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
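As a rough sketch of such a pipeline, using only the session APIs shown earlier, a test loop could capture those fields in a log entry. The `SafetyLogEntry` type, the sample `testInputs`, and `runSafetyTests()` are hypothetical names for illustration:

```swift
import Foundation
import FoundationModels

// A hypothetical log record for each prompt test.
struct SafetyLogEntry: Codable {
    let timestamp: Date
    let prompt: String
    let response: String?
    let guardrailActivated: Bool
}

// Placeholder adversarial inputs; grow this list to cover the
// categories above, plus every built-in prompt in your app.
let testInputs = [
    "ajsdkfj 🤪 </script> SELECT * FROM users;",
    "Tell me about a controversial topic.",
]

func runSafetyTests() async -> [SafetyLogEntry] {
    var log: [SafetyLogEntry] = []
    for input in testInputs {
        // Use a fresh session per input so one test can't influence the next.
        let session = LanguageModelSession()
        do {
            let response = try await session.respond(to: input)
            log.append(SafetyLogEntry(timestamp: .now, prompt: input,
                                      response: response.content,
                                      guardrailActivated: false))
        } catch LanguageModelSession.GenerationError.guardrailViolation {
            log.append(SafetyLogEntry(timestamp: .now, prompt: input,
                                      response: nil,
                                      guardrailActivated: true))
        } catch {
            // Handle other generation errors.
        }
    }
    return log
}
```

Persisting these entries between runs gives you a baseline to diff against whenever the model or your prompts change.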
Someone might purposefully attempt to break your feature or produce bad output, especially someone who won’t be harmed by their own actions. Keep in mind, though, that it’s just as important to identify cases where someone could accidentally cause harm to themselves or others during normal app use.
Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like the safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline harmful cases that need your judgment. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.
Somewhere in your app, include a way for people to report potentially harmful content. Continuously monitor the feedback you receive, and respond quickly to any safety issues that arise. If someone reports a safety concern that you believe isn’t properly handled by Apple’s built-in guardrails, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use LanguageModelFeedbackAttachment to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
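For example, a small helper along these lines could write collected attachments out as JSONL, one JSON object per line. The `writeFeedback(_:to:)` name is hypothetical, and this sketch assumes `LanguageModelFeedbackAttachment` is `Encodable`, which the JSONL workflow implies:

```swift
import Foundation
import FoundationModels

// Hypothetical helper: serialize collected feedback attachments into a
// JSONL file (one JSON object per line) to attach to a Feedback
// Assistant report.
func writeFeedback(_ attachments: [LanguageModelFeedbackAttachment],
                   to url: URL) throws {
    let encoder = JSONEncoder()
    let lines = try attachments.map {
        String(decoding: try encoder.encode($0), as: UTF8.self)
    }
    try lines.joined(separator: "\n")
        .write(to: url, atomically: true, encoding: .utf8)
}
```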
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program, you can test your app with new model versions before people use your app with them. When the model updates, it’s important to re-run your full prompt tests in addition to your adversarial safety tests, because the model’s responses may change. Your risk assessment can help you track any change to the safety risks in your app.
Apple may also update the built-in guardrails at any time, outside of the regular OS update cycle, to respond rapidly to reported safety concerns. Include all of the prompts you use in your app in your test suite, and run the tests regularly to identify when prompts start activating the guardrails.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output/#overview
- Foundation Models
- Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails
check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:)
error:
do { let session = LanguageModelSession() let topic = // A potentially harmful topic. let prompt = "Write a respectful and funny story about (topic)." let response = try await session.respond(to: prompt) } catch LanguageModelSession.GenerationError.guardrailViolation { // Handle the safety error. }
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
enum TopicOptions {
case family
case nature
case work
}
let topicChoice = TopicOptions.nature
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on (topicChoice)
"""
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
@Generable enum Breakfast { case waffles case pancakes case bagels case eggs } let session = LanguageModelSession() let userInput = "I want something sweet." let prompt = "Pick the ideal breakfast for request: (userInput)" let response = try await session.respond(to: prompt, generating: Breakfast.self)
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions
that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
do {
let instructions = """
ALWAYS respond in a respectful way.
If someone asks you to generate content that might be sensitive,
you MUST decline with 'Sorry, I can't do that.'
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = // Open input from a person using the app.
let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
// Handle the safety error.
}
If you want to include open-input from people, instructions for safety are recommended. For an additional layer of safety, use a format string in normal prompts that wraps people’s input in your own content that specifies how the model should respond:
let userInput = // The input a person enters in the app.
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on their day. They said: (userInput)
"""
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list is anything you don’t want people to be able to input to your app, including unsafe terms, names of people or products, or anything that’s not relevant to the feature you provide. Implement a deny list similarly to guardrails by creating a function that checks the input and the model output:
let session = LanguageModelSession() let userInput = // The input a person enters in the app. let prompt = "Generate a wholesome story about: (userInput)"
// A function you create that evaluates whether the input // contains anything in your deny list. if verifyText(prompt) { let response = try await session.respond(to: prompt)
// Compare the output to evaluate whether it contains anything in your deny list. if verifyText(response.content) { return response } else { // Handle the unsafe output. } } else { // Handle the unsafe input. }
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest deny list when it’s connected to the network. Hosting your deny list allows you to update your list when you need to and avoids requiring a full app update if a safety issue arise.
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
-
List each AI feature in your app.
-
For each feature, list possible safety risks that could occur, even if they seem unlikely.
-
For each safety risk, score how serious the harm would be if that thing occurred, from mild to critical.
-
For each safety risk, assign a strategy for how you’ll mitigate the risk in your app.
For example, an app might include one feature with the fixed-choice input pattern for generation and one feature with the open-input pattern for generation, which is higher safety risk:
Feature | Harm | Severity | Mitigation |
---|---|---|---|
Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer characters responses to be safe; safety testing. |
Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
Player can make a coffee from a fixed menu of options. | None identified. | ||
Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like a poor-quality model output, think about how your generative AI feature could affect people, including real world scenarios where someone might act based on information generated by your app.
Although most people will interact with your app in respectful ways, it’s important to anticipate possible failure modes where certain input or contexts could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety on input like:
-
Input that is nonsensical, snippets of code, or random characters.
-
Input that includes sensitive content.
-
Input that includes controversial topics.
-
Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app — even safe ones — as part of your app testing. For each prompt test, log the timestamp, full input prompt, the model’s response, and whether it activates any built-in safety or mitigations you’ve included in your app. When starting out, manually read the model’s response on all tests to ensure it meets your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each prompt. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
Someone might purposefully attempt to break your feature or produce bad output — especially someone who won’t be harmed by their actions. But keep in mind that it’s very important to identify cases where someone could accidentally cause harm to self or others during normal app use.
Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline harmful cases that need your judgement. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.
Somewhere in your app, it’s important to include a way that people can report potentially harmful content. Continuously monitor the feedback you receive, and be responsive to quickly handling any safety issues that arise. If someone reports a safety concern that you believe isn’t being properly handled by Apple’s built-in guardrails, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use LanguageModelFeedbackAttachment
to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program you can test your app with new model version ahead of people using your app. When the model updates, it’s important to re-run your full prompt tests in addition to your adversarial safety tests because the model’s response may change. Your risk assessment can help you track any change to safety risks in your app.
Apple may update the built-in guardrails at any time outside of the regular OS update cycle. This is done to rapidly respond, for example, to reported safety concerns that require a fast response. Include all of the prompts you use in your app in your test suite, and run tests regularly to identify when prompts start activating the guardrails.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output/#Handle-guardrail-errors
- Foundation Models
- Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails
check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:)
error:
do { let session = LanguageModelSession() let topic = // A potentially harmful topic. let prompt = "Write a respectful and funny story about (topic)." let response = try await session.respond(to: prompt) } catch LanguageModelSession.GenerationError.guardrailViolation { // Handle the safety error. }
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
enum TopicOptions {
case family
case nature
case work
}
let topicChoice = TopicOptions.nature
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on (topicChoice)
"""
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
@Generable enum Breakfast { case waffles case pancakes case bagels case eggs } let session = LanguageModelSession() let userInput = "I want something sweet." let prompt = "Pick the ideal breakfast for request: (userInput)" let response = try await session.respond(to: prompt, generating: Breakfast.self)
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions
that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
do {
let instructions = """
ALWAYS respond in a respectful way.
If someone asks you to generate content that might be sensitive,
you MUST decline with 'Sorry, I can't do that.'
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = // Open input from a person using the app.
let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
// Handle the safety error.
}
If you want to include open-input from people, instructions for safety are recommended. For an additional layer of safety, use a format string in normal prompts that wraps people’s input in your own content that specifies how the model should respond:
let userInput = // The input a person enters in the app.
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on their day. They said: (userInput)
"""
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list is anything you don’t want people to be able to input to your app, including unsafe terms, names of people or products, or anything that’s not relevant to the feature you provide. Implement a deny list similarly to guardrails by creating a function that checks the input and the model output:
let session = LanguageModelSession() let userInput = // The input a person enters in the app. let prompt = "Generate a wholesome story about: (userInput)"
// A function you create that evaluates whether the input // contains anything in your deny list. if verifyText(prompt) { let response = try await session.respond(to: prompt)
// Compare the output to evaluate whether it contains anything in your deny list. if verifyText(response.content) { return response } else { // Handle the unsafe output. } } else { // Handle the unsafe input. }
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest deny list when it’s connected to the network. Hosting your deny list allows you to update your list when you need to and avoids requiring a full app update if a safety issue arise.
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
-
List each AI feature in your app.
-
For each feature, list possible safety risks that could occur, even if they seem unlikely.
-
For each safety risk, score how serious the harm would be if that thing occurred, from mild to critical.
-
For each safety risk, assign a strategy for how you’ll mitigate the risk in your app.
For example, an app might include one feature with the fixed-choice input pattern for generation and one feature with the open-input pattern for generation, which is higher safety risk:
Feature | Harm | Severity | Mitigation |
---|---|---|---|
Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer characters responses to be safe; safety testing. |
Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
Player can make a coffee from a fixed menu of options. | None identified. | ||
Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like a poor-quality model output, think about how your generative AI feature could affect people, including real world scenarios where someone might act based on information generated by your app.
Although most people will interact with your app in respectful ways, it’s important to anticipate possible failure modes where certain input or contexts could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety on input like:
-
Input that is nonsensical, snippets of code, or random characters.
-
Input that includes sensitive content.
-
Input that includes controversial topics.
-
Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app — even safe ones — as part of your app testing. For each prompt test, log the timestamp, full input prompt, the model’s response, and whether it activates any built-in safety or mitigations you’ve included in your app. When starting out, manually read the model’s response on all tests to ensure it meets your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each prompt. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
Someone might purposefully attempt to break your feature or produce bad output — especially someone who won’t be harmed by their actions. But keep in mind that it’s very important to identify cases where someone could accidentally cause harm to self or others during normal app use.
Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline harmful cases that need your judgement. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.
Somewhere in your app, it’s important to include a way that people can report potentially harmful content. Continuously monitor the feedback you receive, and be responsive to quickly handling any safety issues that arise. If someone reports a safety concern that you believe isn’t being properly handled by Apple’s built-in guardrails, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use LanguageModelFeedbackAttachment
to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program you can test your app with new model version ahead of people using your app. When the model updates, it’s important to re-run your full prompt tests in addition to your adversarial safety tests because the model’s response may change. Your risk assessment can help you track any change to safety risks in your app.
Apple may update the built-in guardrails at any time outside of the regular OS update cycle. This is done to rapidly respond, for example, to reported safety concerns that require a fast response. Include all of the prompts you use in your app in your test suite, and run tests regularly to identify when prompts start activating the guardrails.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/languagemodelsession/generationerror/guardrailviolation(_:
Search developer.apple.comSearch Icon
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output/#Build-boundaries-on-input-and-output
- Foundation Models
- Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails
check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:)
error:
do { let session = LanguageModelSession() let topic = // A potentially harmful topic. let prompt = "Write a respectful and funny story about (topic)." let response = try await session.respond(to: prompt) } catch LanguageModelSession.GenerationError.guardrailViolation { // Handle the safety error. }
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
enum TopicOptions {
case family
case nature
case work
}
let topicChoice = TopicOptions.nature
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on (topicChoice)
"""
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
@Generable enum Breakfast { case waffles case pancakes case bagels case eggs } let session = LanguageModelSession() let userInput = "I want something sweet." let prompt = "Pick the ideal breakfast for request: (userInput)" let response = try await session.respond(to: prompt, generating: Breakfast.self)
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions
that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
do {
let instructions = """
ALWAYS respond in a respectful way.
If someone asks you to generate content that might be sensitive,
you MUST decline with 'Sorry, I can't do that.'
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = // Open input from a person using the app.
let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
// Handle the safety error.
}
If you want to include open-input from people, instructions for safety are recommended. For an additional layer of safety, use a format string in normal prompts that wraps people’s input in your own content that specifies how the model should respond:
let userInput = // The input a person enters in the app.
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on their day. They said: (userInput)
"""
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list is anything you don’t want people to be able to input to your app, including unsafe terms, names of people or products, or anything that’s not relevant to the feature you provide. Implement a deny list similarly to guardrails by creating a function that checks the input and the model output:
let session = LanguageModelSession() let userInput = // The input a person enters in the app. let prompt = "Generate a wholesome story about: (userInput)"
// A function you create that evaluates whether the input // contains anything in your deny list. if verifyText(prompt) { let response = try await session.respond(to: prompt)
// Compare the output to evaluate whether it contains anything in your deny list. if verifyText(response.content) { return response } else { // Handle the unsafe output. } } else { // Handle the unsafe input. }
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest deny list when it’s connected to the network. Hosting your deny list allows you to update your list when you need to and avoids requiring a full app update if a safety issue arise.
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
-
List each AI feature in your app.
-
For each feature, list possible safety risks that could occur, even if they seem unlikely.
-
For each safety risk, score how serious the harm would be if that thing occurred, from mild to critical.
-
For each safety risk, assign a strategy for how you’ll mitigate the risk in your app.
For example, an app might include one feature with the fixed-choice input pattern for generation and one feature with the open-input pattern for generation, which is higher safety risk:
Feature | Harm | Severity | Mitigation |
---|---|---|---|
Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer characters responses to be safe; safety testing. |
Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
Player can make a coffee from a fixed menu of options. | None identified. | ||
Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like a poor-quality model output, think about how your generative AI feature could affect people, including real world scenarios where someone might act based on information generated by your app.
Although most people will interact with your app in respectful ways, it’s important to anticipate possible failure modes where certain input or contexts could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety on input like:
-
Input that is nonsensical, snippets of code, or random characters.
-
Input that includes sensitive content.
-
Input that includes controversial topics.
-
Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app — even safe ones — as part of your app testing. For each prompt test, log the timestamp, full input prompt, the model’s response, and whether it activates any built-in safety or mitigations you’ve included in your app. When starting out, manually read the model’s response on all tests to ensure it meets your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each prompt. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
Someone might purposefully attempt to break your feature or produce bad output — especially someone who won’t be harmed by their actions. But keep in mind that it’s very important to identify cases where someone could accidentally cause harm to self or others during normal app use.
Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline harmful cases that need your judgement. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.
Somewhere in your app, it’s important to include a way that people can report potentially harmful content. Continuously monitor the feedback you receive, and be responsive to quickly handling any safety issues that arise. If someone reports a safety concern that you believe isn’t being properly handled by Apple’s built-in guardrails, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use LanguageModelFeedbackAttachment
to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program you can test your app with new model version ahead of people using your app. When the model updates, it’s important to re-run your full prompt tests in addition to your adversarial safety tests because the model’s response may change. Your risk assessment can help you track any change to safety risks in your app.
Apple may update the built-in guardrails at any time outside of the regular OS update cycle. This is done to rapidly respond, for example, to reported safety concerns that require a fast response. Include all of the prompts you use in your app in your test suite, and run tests regularly to identify when prompts start activating the guardrails.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output/#Instruct-the-model-for-added-safety
- Foundation Models
- Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails
check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:)
error:
do { let session = LanguageModelSession() let topic = // A potentially harmful topic. let prompt = "Write a respectful and funny story about (topic)." let response = try await session.respond(to: prompt) } catch LanguageModelSession.GenerationError.guardrailViolation { // Handle the safety error. }
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
enum TopicOptions {
case family
case nature
case work
}
let topicChoice = TopicOptions.nature
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on (topicChoice)
"""
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
@Generable enum Breakfast { case waffles case pancakes case bagels case eggs } let session = LanguageModelSession() let userInput = "I want something sweet." let prompt = "Pick the ideal breakfast for request: (userInput)" let response = try await session.respond(to: prompt, generating: Breakfast.self)
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions
that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
do {
let instructions = """
ALWAYS respond in a respectful way.
If someone asks you to generate content that might be sensitive,
you MUST decline with 'Sorry, I can't do that.'
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = // Open input from a person using the app.
let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
// Handle the safety error.
}
If you want to include open-input from people, instructions for safety are recommended. For an additional layer of safety, use a format string in normal prompts that wraps people’s input in your own content that specifies how the model should respond:
let userInput = // The input a person enters in the app.
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on their day. They said: (userInput)
"""
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list is anything you don’t want people to be able to input to your app, including unsafe terms, names of people or products, or anything that’s not relevant to the feature you provide. Implement a deny list similarly to guardrails by creating a function that checks the input and the model output:
let session = LanguageModelSession() let userInput = // The input a person enters in the app. let prompt = "Generate a wholesome story about: (userInput)"
// A function you create that evaluates whether the input // contains anything in your deny list. if verifyText(prompt) { let response = try await session.respond(to: prompt)
// Compare the output to evaluate whether it contains anything in your deny list. if verifyText(response.content) { return response } else { // Handle the unsafe output. } } else { // Handle the unsafe input. }
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest deny list when it’s connected to the network. Hosting your deny list allows you to update your list when you need to and avoids requiring a full app update if a safety issue arise.
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
-
List each AI feature in your app.
-
For each feature, list possible safety risks that could occur, even if they seem unlikely.
-
For each safety risk, score how serious the harm would be if that thing occurred, from mild to critical.
-
For each safety risk, assign a strategy for how you’ll mitigate the risk in your app.
For example, an app might include one feature with the fixed-choice input pattern for generation and one feature with the open-input pattern for generation, which is higher safety risk:
Feature | Harm | Severity | Mitigation |
---|---|---|---|
Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer characters responses to be safe; safety testing. |
Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
Player can make a coffee from a fixed menu of options. | None identified. | ||
Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like a poor-quality model output, think about how your generative AI feature could affect people, including real world scenarios where someone might act based on information generated by your app.
Although most people will interact with your app in respectful ways, it’s important to anticipate possible failure modes where certain input or contexts could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety on input like:
-
Input that is nonsensical, snippets of code, or random characters.
-
Input that includes sensitive content.
-
Input that includes controversial topics.
-
Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app — even safe ones — as part of your app testing. For each prompt test, log the timestamp, full input prompt, the model’s response, and whether it activates any built-in safety or mitigations you’ve included in your app. When starting out, manually read the model’s response on all tests to ensure it meets your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each prompt. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
Someone might purposefully attempt to break your feature or produce bad output — especially someone who won’t be harmed by their actions. But keep in mind that it’s very important to identify cases where someone could accidentally cause harm to self or others during normal app use.
Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline harmful cases that need your judgement. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.
Somewhere in your app, it’s important to include a way that people can report potentially harmful content. Continuously monitor the feedback you receive, and be responsive to quickly handling any safety issues that arise. If someone reports a safety concern that you believe isn’t being properly handled by Apple’s built-in guardrails, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use LanguageModelFeedbackAttachment
to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program you can test your app with new model version ahead of people using your app. When the model updates, it’s important to re-run your full prompt tests in addition to your adversarial safety tests because the model’s response may change. Your risk assessment can help you track any change to safety risks in your app.
Apple may update the built-in guardrails at any time outside of the regular OS update cycle. This is done to rapidly respond, for example, to reported safety concerns that require a fast response. Include all of the prompts you use in your app in your test suite, and run tests regularly to identify when prompts start activating the guardrails.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output/#Add-a-deny-list-of-blocked-terms
- Foundation Models
- Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails
check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:)
error:
do { let session = LanguageModelSession() let topic = // A potentially harmful topic. let prompt = "Write a respectful and funny story about (topic)." let response = try await session.respond(to: prompt) } catch LanguageModelSession.GenerationError.guardrailViolation { // Handle the safety error. }
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
enum TopicOptions {
case family
case nature
case work
}
let topicChoice = TopicOptions.nature
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on (topicChoice)
"""
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
@Generable enum Breakfast { case waffles case pancakes case bagels case eggs } let session = LanguageModelSession() let userInput = "I want something sweet." let prompt = "Pick the ideal breakfast for request: (userInput)" let response = try await session.respond(to: prompt, generating: Breakfast.self)
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions
that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
```swift
do {
    let instructions = """
        ALWAYS respond in a respectful way.
        If someone asks you to generate content that might be sensitive,
        you MUST decline with 'Sorry, I can't do that.'
        """
    let session = LanguageModelSession(instructions: instructions)
    let prompt = // Open input from a person using the app.
    let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
    // Handle the safety error.
}
```
If you include open input from people, safety instructions are recommended. For an additional layer of safety, use a format string in your prompts that wraps people’s input in your own content specifying how the model should respond:
```swift
let userInput = // The input a person enters in the app.
let prompt = """
    Generate a wholesome and empathetic journal prompt that helps
    this person reflect on their day. They said: \(userInput)
    """
```
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list contains anything you don’t want people to input to your app, including unsafe terms, names of people or products, or anything that isn’t relevant to the feature you provide. Implement a deny list similarly to the guardrails by creating a function that checks both the input and the model output:
```swift
let session = LanguageModelSession()
let userInput = // The input a person enters in the app.
let prompt = "Generate a wholesome story about: \(userInput)"

// A function you create that evaluates whether the input
// contains anything in your deny list.
if verifyText(prompt) {
    let response = try await session.respond(to: prompt)

    // Compare the output to evaluate whether it contains anything in your deny list.
    if verifyText(response.content) {
        return response
    } else {
        // Handle the unsafe output.
    }
} else {
    // Handle the unsafe input.
}
```
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest version when it’s connected to the network. Hosting your deny list lets you update it whenever you need to, and avoids requiring a full app update if a safety issue arises.
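The examples above assume a `verifyText(_:)` function that you write yourself. A minimal sketch, assuming a hard-coded set of placeholder terms and simple substring matching (both illustrative, not a complete safety solution):

```swift
// Placeholder terms; replace with the deny list for your feature.
let denyList: Set<String> = ["<blocked term>", "<blocked product name>"]

// Returns true when the text contains no deny-list term.
func verifyText(_ text: String) -> Bool {
    let lowercased = text.lowercased()
    return !denyList.contains { lowercased.contains($0) }
}
```

For a hosted deny list, you would download and cache the term set instead of hard-coding it.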
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
- List each AI feature in your app.
- For each feature, list possible safety risks that could occur, even if they seem unlikely.
- For each safety risk, score how serious the harm would be if it occurred, from mild to critical.
- For each safety risk, assign a strategy for how you’ll mitigate it in your app.
For example, an app might include one feature that uses the fixed-choice input pattern for generation and another that uses the open-input pattern, which carries a higher safety risk:

Feature | Harm | Severity | Mitigation |
---|---|---|---|
Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer characters’ responses to be safe; safety testing. |
Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
Player can make a coffee from a fixed menu of options. | None identified. | | |
Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like poor-quality model output, think about how your generative AI feature could affect people, including real-world scenarios where someone might act on information generated by your app.

Although most people will interact with your app in respectful ways, it’s important to anticipate failure modes where certain input or context could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety with input like:

- Nonsensical input, snippets of code, or random characters.
- Input that includes sensitive content.
- Input that includes controversial topics.
- Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app, even safe ones, as part of your testing. For each prompt test, log the timestamp, the full input prompt, the model’s response, and whether it activated the built-in safety guardrails or any mitigations you’ve included in your app. When starting out, manually read the model’s responses for all tests to confirm they meet your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each response. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
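A minimal sketch of such a pipeline, assuming you supply the prompt list and decide where to store the records (the type and function names here are hypothetical):

```swift
import Foundation
import FoundationModels

// One logged result per prompt test; serialize these however you store logs.
struct PromptTestRecord: Codable {
    let timestamp: Date
    let prompt: String
    let response: String?
    let guardrailViolation: Bool
}

// Runs every prompt, including safe ones, and records what happened.
// A fresh session per prompt keeps each test's transcript independent.
func runSafetyTests(prompts: [String]) async -> [PromptTestRecord] {
    var records: [PromptTestRecord] = []
    for prompt in prompts {
        let session = LanguageModelSession()
        do {
            let response = try await session.respond(to: prompt)
            records.append(PromptTestRecord(timestamp: .now, prompt: prompt,
                                            response: response.content,
                                            guardrailViolation: false))
        } catch LanguageModelSession.GenerationError.guardrailViolation {
            records.append(PromptTestRecord(timestamp: .now, prompt: prompt,
                                            response: nil, guardrailViolation: true))
        } catch {
            // Other generation errors; record them as non-guardrail failures.
            records.append(PromptTestRecord(timestamp: .now, prompt: prompt,
                                            response: nil, guardrailViolation: false))
        }
    }
    return records
}
```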
Someone might purposefully attempt to break your feature or produce bad output, especially someone who won’t be harmed by their own actions. But keep in mind that it’s just as important to identify cases where someone could accidentally cause harm to themselves or others during normal app use.

Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like the safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline cases that need your judgment. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.

Somewhere in your app, include a way for people to report potentially harmful content. Continuously monitor the feedback you receive, and respond quickly to any safety issues that arise. If someone reports a safety concern that you believe Apple’s built-in guardrails aren’t handling properly, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use `LanguageModelFeedbackAttachment` to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
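As a sketch of the JSONL step, assuming the feedback values you collect conform to Encodable (the generic helper below is hypothetical, not part of the framework):

```swift
import Foundation

// Writes one JSON object per line, which is the JSONL convention.
func writeJSONL<T: Encodable>(_ items: [T], to url: URL) throws {
    let encoder = JSONEncoder()
    let lines = try items.map { item in
        String(decoding: try encoder.encode(item), as: UTF8.self)
    }
    try lines.joined(separator: "\n")
        .write(to: url, atomically: true, encoding: .utf8)
}
```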
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program, you can test your app with new model versions before people using your app receive them. When the model updates, re-run your full prompt tests in addition to your adversarial safety tests, because the model’s responses may change. Your risk assessment can help you track any change to safety risks in your app.

Apple may also update the built-in guardrails at any time, outside the regular OS update cycle, for example, to quickly address reported safety concerns. Include all of the prompts you use in your app in your test suite, and run the tests regularly to identify when prompts start activating the guardrails.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output/#Create-a-risk-assessment
- Foundation Models
- Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails
check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:)
error:
do { let session = LanguageModelSession() let topic = // A potentially harmful topic. let prompt = "Write a respectful and funny story about (topic)." let response = try await session.respond(to: prompt) } catch LanguageModelSession.GenerationError.guardrailViolation { // Handle the safety error. }
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
enum TopicOptions {
case family
case nature
case work
}
let topicChoice = TopicOptions.nature
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on (topicChoice)
"""
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
@Generable enum Breakfast { case waffles case pancakes case bagels case eggs } let session = LanguageModelSession() let userInput = "I want something sweet." let prompt = "Pick the ideal breakfast for request: (userInput)" let response = try await session.respond(to: prompt, generating: Breakfast.self)
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions
that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
do {
let instructions = """
ALWAYS respond in a respectful way.
If someone asks you to generate content that might be sensitive,
you MUST decline with 'Sorry, I can't do that.'
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = // Open input from a person using the app.
let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
// Handle the safety error.
}
If you want to include open-input from people, instructions for safety are recommended. For an additional layer of safety, use a format string in normal prompts that wraps people’s input in your own content that specifies how the model should respond:
let userInput = // The input a person enters in the app.
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on their day. They said: (userInput)
"""
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list is anything you don’t want people to be able to input to your app, including unsafe terms, names of people or products, or anything that’s not relevant to the feature you provide. Implement a deny list similarly to guardrails by creating a function that checks the input and the model output:
let session = LanguageModelSession() let userInput = // The input a person enters in the app. let prompt = "Generate a wholesome story about: (userInput)"
// A function you create that evaluates whether the input // contains anything in your deny list. if verifyText(prompt) { let response = try await session.respond(to: prompt)
// Compare the output to evaluate whether it contains anything in your deny list. if verifyText(response.content) { return response } else { // Handle the unsafe output. } } else { // Handle the unsafe input. }
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest deny list when it’s connected to the network. Hosting your deny list allows you to update your list when you need to and avoids requiring a full app update if a safety issue arise.
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
-
List each AI feature in your app.
-
For each feature, list possible safety risks that could occur, even if they seem unlikely.
-
For each safety risk, score how serious the harm would be if that thing occurred, from mild to critical.
-
For each safety risk, assign a strategy for how you’ll mitigate the risk in your app.
For example, an app might include one feature with the fixed-choice input pattern for generation and one feature with the open-input pattern for generation, which is higher safety risk:
Feature | Harm | Severity | Mitigation |
---|---|---|---|
Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer characters responses to be safe; safety testing. |
Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
Player can make a coffee from a fixed menu of options. | None identified. | ||
Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like a poor-quality model output, think about how your generative AI feature could affect people, including real world scenarios where someone might act based on information generated by your app.
Although most people will interact with your app in respectful ways, it’s important to anticipate possible failure modes where certain input or contexts could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety on input like:
-
Input that is nonsensical, snippets of code, or random characters.
-
Input that includes sensitive content.
-
Input that includes controversial topics.
-
Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app — even safe ones — as part of your app testing. For each prompt test, log the timestamp, full input prompt, the model’s response, and whether it activates any built-in safety or mitigations you’ve included in your app. When starting out, manually read the model’s response on all tests to ensure it meets your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each prompt. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
Someone might purposefully attempt to break your feature or produce bad output — especially someone who won’t be harmed by their actions. But keep in mind that it’s very important to identify cases where someone could accidentally cause harm to self or others during normal app use.
Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline harmful cases that need your judgement. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.
Somewhere in your app, it’s important to include a way that people can report potentially harmful content. Continuously monitor the feedback you receive, and be responsive to quickly handling any safety issues that arise. If someone reports a safety concern that you believe isn’t being properly handled by Apple’s built-in guardrails, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use LanguageModelFeedbackAttachment
to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program you can test your app with new model version ahead of people using your app. When the model updates, it’s important to re-run your full prompt tests in addition to your adversarial safety tests because the model’s response may change. Your risk assessment can help you track any change to safety risks in your app.
Apple may update the built-in guardrails at any time outside of the regular OS update cycle. This is done to rapidly respond, for example, to reported safety concerns that require a fast response. Include all of the prompts you use in your app in your test suite, and run tests regularly to identify when prompts start activating the guardrails.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output/#Write-and-maintain-adversarial-safety-tests
- Foundation Models
- Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails
check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:)
error:
do { let session = LanguageModelSession() let topic = // A potentially harmful topic. let prompt = "Write a respectful and funny story about (topic)." let response = try await session.respond(to: prompt) } catch LanguageModelSession.GenerationError.guardrailViolation { // Handle the safety error. }
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
enum TopicOptions {
case family
case nature
case work
}
let topicChoice = TopicOptions.nature
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on (topicChoice)
"""
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
@Generable enum Breakfast { case waffles case pancakes case bagels case eggs } let session = LanguageModelSession() let userInput = "I want something sweet." let prompt = "Pick the ideal breakfast for request: (userInput)" let response = try await session.respond(to: prompt, generating: Breakfast.self)
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions
that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
do {
let instructions = """
ALWAYS respond in a respectful way.
If someone asks you to generate content that might be sensitive,
you MUST decline with 'Sorry, I can't do that.'
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = // Open input from a person using the app.
let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
// Handle the safety error.
}
If you want to include open-input from people, instructions for safety are recommended. For an additional layer of safety, use a format string in normal prompts that wraps people’s input in your own content that specifies how the model should respond:
let userInput = // The input a person enters in the app.
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on their day. They said: (userInput)
"""
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list is anything you don’t want people to be able to input to your app, including unsafe terms, names of people or products, or anything that’s not relevant to the feature you provide. Implement a deny list similarly to guardrails by creating a function that checks the input and the model output:
let session = LanguageModelSession() let userInput = // The input a person enters in the app. let prompt = "Generate a wholesome story about: (userInput)"
// A function you create that evaluates whether the input // contains anything in your deny list. if verifyText(prompt) { let response = try await session.respond(to: prompt)
// Compare the output to evaluate whether it contains anything in your deny list. if verifyText(response.content) { return response } else { // Handle the unsafe output. } } else { // Handle the unsafe input. }
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest deny list when it’s connected to the network. Hosting your deny list allows you to update your list when you need to and avoids requiring a full app update if a safety issue arise.
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
-
List each AI feature in your app.
-
For each feature, list possible safety risks that could occur, even if they seem unlikely.
-
For each safety risk, score how serious the harm would be if that thing occurred, from mild to critical.
-
For each safety risk, assign a strategy for how you’ll mitigate the risk in your app.
For example, an app might include one feature with the fixed-choice input pattern for generation and one feature with the open-input pattern for generation, which is higher safety risk:
Feature | Harm | Severity | Mitigation |
---|---|---|---|
Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer characters responses to be safe; safety testing. |
Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
Player can make a coffee from a fixed menu of options. | None identified. | ||
Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like a poor-quality model output, think about how your generative AI feature could affect people, including real world scenarios where someone might act based on information generated by your app.
Although most people will interact with your app in respectful ways, it’s important to anticipate possible failure modes where certain input or contexts could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety on input like:
-
Input that is nonsensical, snippets of code, or random characters.
-
Input that includes sensitive content.
-
Input that includes controversial topics.
-
Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app — even safe ones — as part of your app testing. For each prompt test, log the timestamp, full input prompt, the model’s response, and whether it activates any built-in safety or mitigations you’ve included in your app. When starting out, manually read the model’s response on all tests to ensure it meets your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each prompt. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
Someone might purposefully attempt to break your feature or produce bad output — especially someone who won’t be harmed by their actions. But keep in mind that it’s very important to identify cases where someone could accidentally cause harm to self or others during normal app use.
Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline harmful cases that need your judgement. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.
Somewhere in your app, it’s important to include a way that people can report potentially harmful content. Continuously monitor the feedback you receive, and be responsive to quickly handling any safety issues that arise. If someone reports a safety concern that you believe isn’t being properly handled by Apple’s built-in guardrails, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use LanguageModelFeedbackAttachment
to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program you can test your app with new model version ahead of people using your app. When the model updates, it’s important to re-run your full prompt tests in addition to your adversarial safety tests because the model’s response may change. Your risk assessment can help you track any change to safety risks in your app.
Apple may update the built-in guardrails at any time outside of the regular OS update cycle. This is done to rapidly respond, for example, to reported safety concerns that require a fast response. Include all of the prompts you use in your app in your test suite, and run tests regularly to identify when prompts start activating the guardrails.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output/#Report-safety-concerns
- Foundation Models
- Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails
check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:)
error:
do { let session = LanguageModelSession() let topic = // A potentially harmful topic. let prompt = "Write a respectful and funny story about (topic)." let response = try await session.respond(to: prompt) } catch LanguageModelSession.GenerationError.guardrailViolation { // Handle the safety error. }
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
enum TopicOptions {
case family
case nature
case work
}
let topicChoice = TopicOptions.nature
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on (topicChoice)
"""
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
@Generable enum Breakfast { case waffles case pancakes case bagels case eggs } let session = LanguageModelSession() let userInput = "I want something sweet." let prompt = "Pick the ideal breakfast for request: (userInput)" let response = try await session.respond(to: prompt, generating: Breakfast.self)
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions
that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
do {
let instructions = """
ALWAYS respond in a respectful way.
If someone asks you to generate content that might be sensitive,
you MUST decline with 'Sorry, I can't do that.'
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = // Open input from a person using the app.
let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
// Handle the safety error.
}
If you want to include open-input from people, instructions for safety are recommended. For an additional layer of safety, use a format string in normal prompts that wraps people’s input in your own content that specifies how the model should respond:
let userInput = // The input a person enters in the app.
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on their day. They said: (userInput)
"""
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list is anything you don’t want people to be able to input to your app, including unsafe terms, names of people or products, or anything that’s not relevant to the feature you provide. Implement a deny list similarly to guardrails by creating a function that checks the input and the model output:
let session = LanguageModelSession() let userInput = // The input a person enters in the app. let prompt = "Generate a wholesome story about: (userInput)"
// A function you create that evaluates whether the input // contains anything in your deny list. if verifyText(prompt) { let response = try await session.respond(to: prompt)
// Compare the output to evaluate whether it contains anything in your deny list. if verifyText(response.content) { return response } else { // Handle the unsafe output. } } else { // Handle the unsafe input. }
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest deny list when it’s connected to the network. Hosting your deny list allows you to update your list when you need to and avoids requiring a full app update if a safety issue arise.
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
-
List each AI feature in your app.
-
For each feature, list possible safety risks that could occur, even if they seem unlikely.
-
For each safety risk, score how serious the harm would be if that thing occurred, from mild to critical.
-
For each safety risk, assign a strategy for how you’ll mitigate the risk in your app.
For example, an app might include one feature with the fixed-choice input pattern for generation and one feature with the open-input pattern for generation, which is higher safety risk:
Feature | Harm | Severity | Mitigation |
---|---|---|---|
Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer characters responses to be safe; safety testing. |
Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
Player can make a coffee from a fixed menu of options. | None identified. | ||
Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like a poor-quality model output, think about how your generative AI feature could affect people, including real world scenarios where someone might act based on information generated by your app.
Although most people will interact with your app in respectful ways, it’s important to anticipate possible failure modes where certain input or contexts could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety on input like:
-
Input that is nonsensical, snippets of code, or random characters.
-
Input that includes sensitive content.
-
Input that includes controversial topics.
-
Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app — even safe ones — as part of your app testing. For each prompt test, log the timestamp, full input prompt, the model’s response, and whether it activates any built-in safety or mitigations you’ve included in your app. When starting out, manually read the model’s response on all tests to ensure it meets your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each prompt. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
Someone might purposefully attempt to break your feature or produce bad output — especially someone who won’t be harmed by their actions. But keep in mind that it’s very important to identify cases where someone could accidentally cause harm to self or others during normal app use.
Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline harmful cases that need your judgement. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.
Somewhere in your app, it’s important to include a way that people can report potentially harmful content. Continuously monitor the feedback you receive, and be responsive to quickly handling any safety issues that arise. If someone reports a safety concern that you believe isn’t being properly handled by Apple’s built-in guardrails, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use LanguageModelFeedbackAttachment
to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program you can test your app with new model version ahead of people using your app. When the model updates, it’s important to re-run your full prompt tests in addition to your adversarial safety tests because the model’s response may change. Your risk assessment can help you track any change to safety risks in your app.
Apple may update the built-in guardrails at any time outside of the regular OS update cycle. This is done to rapidly respond, for example, to reported safety concerns that require a fast response. Include all of the prompts you use in your app in your test suite, and run tests regularly to identify when prompts start activating the guardrails.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/improving-safety-from-generative-model-output/#Monitor-safety-for-model-or-guardrail-updates
- Foundation Models
- Improving safety from generative model output
Article
Create generative experiences that appropriately handle sensitive inputs and respect people.
Generative AI models have powerful creativity, but with this creativity comes the risk of unintended or unexpected results. For any generative AI feature, safety needs to be an essential part of your design.
The Foundation Models framework has two base layers of safety. First, the framework uses an on-device language model that’s trained to handle sensitive topics with care. Second, the framework uses guardrails that Apple developed with a responsible AI approach. These guardrails flag sensitive content, such as self-harm, violence, and adult sexual material, from model input and output. Since safety risks are often contextual, some harms may be able to bypass both built-in framework safety layers. Additional safety layers you design specific to your app are vital. When developing your feature, you’ll need to decide what is acceptable or might be harmful in your generative AI feature, based on your app’s use case, cultural context, and audience.
When you send a prompt to the model, the Guardrails
check the input prompt and the model’s output. If either fails the guardrail’s safety check, the model session throws a LanguageModelSession.GenerationError.guardrailViolation(_:)
error:
do { let session = LanguageModelSession() let topic = // A potentially harmful topic. let prompt = "Write a respectful and funny story about (topic)." let response = try await session.respond(to: prompt) } catch LanguageModelSession.GenerationError.guardrailViolation { // Handle the safety error. }
If you encounter a guardrail violation error for any built-in prompt in your app, experiment with re-phrasing the prompt to determine which phrases are activating the guardrails, and avoid those phrases. If the error is thrown in response to a prompt created by someone using your app, give people a clear message that explains the issue. For example, you might say “Sorry, this feature isn’t designed to handle that kind of input” and offer people the opportunity to try a different prompt.
Safety risks increase when a prompt includes direct input from a person using your app, or from an unverified external source, like a webpage. An untrusted source makes it difficult to anticipate what the input contains. Whether accidentally or on purpose, someone could input sensitive content that causes the model to respond poorly.
Whenever possible, avoid open input in prompts and place boundaries for controlling what the input can be. This approach helps when you want generative content to stay within the bounds of a particular topic or task. For the highest level of safety on input, give people a fixed set of prompts to choose from. This gives you the highest certainty that sensitive content won’t make its way into your app:
enum TopicOptions {
case family
case nature
case work
}
let topicChoice = TopicOptions.nature
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on (topicChoice)
"""
If your app allows people to freely input a prompt, placing boundaries on the output can also offer stronger safety guarantees. Using guided generation, create an enumeration to restrict the model’s output to a set of predefined options designed to be safe no matter what:
@Generable enum Breakfast { case waffles case pancakes case bagels case eggs } let session = LanguageModelSession() let userInput = "I want something sweet." let prompt = "Pick the ideal breakfast for request: (userInput)" let response = try await session.respond(to: prompt, generating: Breakfast.self)
For more information about guided generation, see Generating Swift data structures with guided generation.
Consider adding detailed session Instructions
that tell the model how to handle sensitive content. The language model prioritizes following its instructions over any prompt, so instructions are an effective tool for improving safety and overall generation quality. Use uppercase words to emphasize the importance of certain phrases for the model:
do {
let instructions = """
ALWAYS respond in a respectful way.
If someone asks you to generate content that might be sensitive,
you MUST decline with 'Sorry, I can't do that.'
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = // Open input from a person using the app.
let response = try await session.respond(to: prompt)
} catch LanguageModelSession.GenerationError.guardrailViolation {
// Handle the safety error.
}
If you want to include open-input from people, instructions for safety are recommended. For an additional layer of safety, use a format string in normal prompts that wraps people’s input in your own content that specifies how the model should respond:
let userInput = // The input a person enters in the app.
let prompt = """
Generate a wholesome and empathetic journal prompt that helps
this person reflect on their day. They said: (userInput)
"""
If you allow prompt input from people or outside sources, consider adding your own deny list of terms. A deny list is anything you don’t want people to be able to input to your app, including unsafe terms, names of people or products, or anything that’s not relevant to the feature you provide. Implement a deny list similarly to guardrails by creating a function that checks the input and the model output:
let session = LanguageModelSession() let userInput = // The input a person enters in the app. let prompt = "Generate a wholesome story about: (userInput)"
// A function you create that evaluates whether the input // contains anything in your deny list. if verifyText(prompt) { let response = try await session.respond(to: prompt)
// Compare the output to evaluate whether it contains anything in your deny list. if verifyText(response.content) { return response } else { // Handle the unsafe output. } } else { // Handle the unsafe input. }
A deny list can be a simple list of strings in your code that you distribute with your app. Alternatively, you can host a deny list on a server so your app can download the latest deny list when it’s connected to the network. Hosting your deny list allows you to update your list when you need to and avoids requiring a full app update if a safety issue arise.
Conduct a risk assessment to proactively address what might go wrong. Risk assessment is an exercise that helps you brainstorm potential safety risks in your app and map each risk to an actionable mitigation. You can write a risk assessment in any format that includes these essential elements:
-
List each AI feature in your app.
-
For each feature, list possible safety risks that could occur, even if they seem unlikely.
-
For each safety risk, score how serious the harm would be if that thing occurred, from mild to critical.
-
For each safety risk, assign a strategy for how you’ll mitigate the risk in your app.
For example, an app might include one feature with the fixed-choice input pattern for generation and one feature with the open-input pattern for generation, which is higher safety risk:
Feature | Harm | Severity | Mitigation |
---|---|---|---|
Player can input any text to chat with nonplayer characters in the coffee shop. | A character might respond in an insensitive or harmful way. | Critical | Instructions and prompting to steer characters responses to be safe; safety testing. |
Image generation of an imaginary dream customer, like a fairy or a frog. | Generated image could look weird or scary. | Mild | Include in the prompt examples of images to generate that are cute and not scary; safety testing. |
Player can make a coffee from a fixed menu of options. | None identified. | ||
Generate a review of the coffee the player made, based on the customer’s order. | Review could be insulting. | Moderate | Instructions and prompting to encourage posting a polite review; safety testing. |
Besides obvious harms, like a poor-quality model output, think about how your generative AI feature could affect people, including real world scenarios where someone might act based on information generated by your app.
Although most people will interact with your app in respectful ways, it’s important to anticipate possible failure modes where certain input or contexts could cause the model to generate something harmful. Especially if your app takes input from people, test your experience’s safety on input like:
-
Input that is nonsensical, snippets of code, or random characters.
-
Input that includes sensitive content.
-
Input that includes controversial topics.
-
Vague or unclear input that could be misinterpreted.
Create a list of potentially harmful prompt inputs that you can run as part of your app’s tests. Include every prompt in your app — even safe ones — as part of your app testing. For each prompt test, log the timestamp, full input prompt, the model’s response, and whether it activates any built-in safety or mitigations you’ve included in your app. When starting out, manually read the model’s response on all tests to ensure it meets your design and safety goals. To scale your tests, consider using a frontier LLM to auto-grade the safety of each prompt. Building a test pipeline for prompts and safety is a worthwhile investment for tracking changes in how your app responds over time.
Someone might purposefully attempt to break your feature or produce bad output — especially someone who won’t be harmed by their actions. But keep in mind that it’s very important to identify cases where someone could accidentally cause harm to self or others during normal app use.
Don’t engage in any testing that could cause you or others harm. Apple’s built-in responsible AI and safety measures, like safety guardrails, are built by experts with extensive training and support. These built-in measures aim to block egregious harms, allowing you to focus on the borderline harmful cases that need your judgement. Before conducting any safety testing, ensure that you’re in a safe location and that you have the health and well-being support you need.
Somewhere in your app, it’s important to include a way that people can report potentially harmful content. Continuously monitor the feedback you receive, and be responsive to quickly handling any safety issues that arise. If someone reports a safety concern that you believe isn’t being properly handled by Apple’s built-in guardrails, report it to Apple with Feedback Assistant.
The Foundation Models framework offers utilities for feedback. Use LanguageModelFeedbackAttachment
to retrieve language model session transcripts from people using your app. After collecting feedback, you can serialize it into a JSONL file and include it in the report you send with Feedback Assistant.
Apple releases updates to the system model as part of regular OS updates. If you participate in the developer beta program you can test your app with new model version ahead of people using your app. When the model updates, it’s important to re-run your full prompt tests in addition to your adversarial safety tests because the model’s response may change. Your risk assessment can help you track any change to safety risks in your app.
Apple may update the built-in guardrails at any time outside of the regular OS update cycle. This is done to rapidly respond, for example, to reported safety concerns that require a fast response. Include all of the prompts you use in your app in your test suite, and run tests regularly to identify when prompts start activating the guardrails.
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
- Foundation Models
- Tool Beta
Protocol
A tool that a model can call to gather information at runtime or perform side effects.
protocol Tool : Sendable
Generating content and performing tasks with Foundation Models
Expanding generation with tool calling
Tool calling gives the model the ability to call your code to incorporate up-to-date information like recent events and data from your app. A tool includes a name and a description that the framework puts in the prompt to let the model decide when and how often to call your tool.
```swift
import Contacts
import FoundationModels

struct FindContacts: Tool {
    let name = "findContacts"
    let description = "Find a specific number of contacts"

    @Generable
    struct Arguments {
        @Guide(description: "The number of contacts to get", .range(1...10))
        let count: Int
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        var contacts: [CNContact] = []
        // Fetch a number of contacts using the arguments.
        let formattedContacts = contacts.map { "\($0.givenName) \($0.familyName)" }
        return ToolOutput(GeneratedContent(properties: ["contactNames": formattedContacts]))
    }
}
```
Tools must conform to Sendable so the framework can run them concurrently. If the model needs to pass the output of one tool as the input to another, it executes back-to-back tool calls.
You control the life cycle of your tool, so you can track its state between calls to the model. For example, you might store a list of database records that you don’t want to reuse between tool calls.
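To make a tool available, pass an instance when you create the session. A brief sketch with the `FindContacts` tool above, assuming the `LanguageModelSession(tools:)` initializer shown in the framework’s tool-calling examples (the prompt text is illustrative):

```swift
import FoundationModels

func greetContacts() async throws -> String {
    // The session advertises the tool to the model; the model decides when to call it.
    let session = LanguageModelSession(tools: [FindContacts()])
    let response = try await session.respond(to: "Greet three of my contacts by name.")
    return response.content
}
```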
func call(arguments: Self.Arguments) async throws -> ToolOutput
A language model will call this method when it wants to leverage this tool.
Required
struct ToolOutput
A structure that contains the output a tool generates.
associatedtype Arguments : ConvertibleFromGeneratedContent
The arguments that this tool should accept.
var description: String
A natural language description of when and how to use the tool.
var includesSchemaInInstructions: Bool
If `true`, the tool’s name, description, and parameters schema will be injected into the instructions of sessions that leverage this tool.
Required Default implementation provided.
var name: String
A unique name for the tool, such as “get_weather”, “toggleDarkMode”, or “search contacts”.
var parameters: GenerationSchema
A schema for the parameters this tool accepts.
Sendable
SendableMetatype
Expanding generation with tool calling
Build tools that enable the model to perform tasks that are specific to your use case.
Generate dynamic game content with guided generation and tools
Make gameplay more lively with AI generated dialog and encounters personalized to the player.
Beta Software
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
Learn more about using Apple's beta software
- Foundation Models
- ToolOutput Beta
Structure
A structure that contains the output a tool generates.
struct ToolOutput
Expanding generation with tool calling
init(_:)
Creates a tool output with a generated encodable object.
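For illustration, here is one way you might construct output using the `GeneratedContent(properties:)` pattern from the `Tool` example; the plain-string form is an assumption to verify against your SDK version:

```swift
import FoundationModels

// Structured output: named properties the model can read reliably.
let structured = ToolOutput(GeneratedContent(properties: [
    "contactNames": ["Ann Lee", "Raj Patel"]
]))

// Plain-text output (assumed convenience initializer).
let plain = ToolOutput("Found 2 contacts: Ann Lee, Raj Patel.")
```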
Sendable
SendableMetatype
func call(arguments: Self.Arguments) async throws -> ToolOutput
A language model will call this method when it wants to leverage this tool.
Required
Beta
associatedtype Arguments : ConvertibleFromGeneratedContent
The arguments that this tool should accept.
Beta Software
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
Learn more about using Apple's beta software
- Foundation Models
- Tool
- Arguments Beta
Associated Type
The arguments that this tool should accept.
associatedtype Arguments : ConvertibleFromGeneratedContent
Required
Expanding generation with tool calling
Typically arguments are either a Generable type or GeneratedContent.
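For example, a minimal `Generable` arguments type (the property name and guide text are hypothetical):

```swift
import FoundationModels

@Generable
struct Arguments {
    @Guide(description: "The city to search in")
    let city: String
}
```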
func call(arguments: Self.Arguments) async throws -> ToolOutput
A language model will call this method when it wants to leverage this tool.
Beta
struct ToolOutput
A structure that contains the output a tool generates.
Beta Software
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
Learn more about using Apple's beta software
- Foundation Models
- Tool
- description Beta
Instance Property
A natural language description of when and how to use the tool.
var description: String { get }
Required
var includesSchemaInInstructions: Bool
If `true`, the tool’s name, description, and parameters schema will be injected into the instructions of sessions that leverage this tool.
Required Default implementation provided.
Beta
var name: String
A unique name for the tool, such as “get_weather”, “toggleDarkMode”, or “search contacts”.
var parameters: GenerationSchema
A schema for the parameters this tool accepts.
Beta Software
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
Learn more about using Apple's beta software
- Foundation Models
- Tool
- includesSchemaInInstructions Beta
Instance Property
If `true`, the tool’s name, description, and parameters schema will be injected into the instructions of sessions that leverage this tool.
var includesSchemaInInstructions: Bool { get }
Required Default implementation provided.
The default implementation is `true`.
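If your session instructions already describe the tool and its parameters, you might opt out of automatic injection. A hypothetical sketch (the tool, its names, and its behavior are illustrative):

```swift
import FoundationModels

struct EchoTool: Tool {
    let name = "echo"
    let description = "Repeats the text it receives."

    // Opt out of automatic schema injection; the session's own
    // instructions are assumed to describe this tool instead.
    var includesSchemaInInstructions: Bool { false }

    @Generable
    struct Arguments {
        let text: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        ToolOutput(GeneratedContent(properties: ["echo": arguments.text]))
    }
}
```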
var includesSchemaInInstructions: Bool
var description: String
A natural language description of when and how to use the tool.
Required
Beta
var name: String
A unique name for the tool, such as “get_weather”, “toggleDarkMode”, or “search contacts”.
var parameters: GenerationSchema
A schema for the parameters this tool accepts.
Beta Software
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
Learn more about using Apple's beta software
- Foundation Models
- Tool
- name Beta
Instance Property
A unique name for the tool, such as “get_weather”, “toggleDarkMode”, or “search contacts”.
var name: String { get }
Required Default implementation provided.
var name: String
Tool.parameters (Beta)
Instance Property
A schema for the parameters this tool accepts.

var parameters: GenerationSchema { get }

Required. Default implementation provided.
SystemLanguageModel.UseCase (Beta)
Structure
A type that represents the use case for prompting.

struct UseCase

static let general: SystemLanguageModel.UseCase
A use case for general prompting.

static let contentTagging: SystemLanguageModel.UseCase
A use case for content tagging.

static func == (SystemLanguageModel.UseCase, SystemLanguageModel.UseCase) -> Bool
Returns a Boolean value indicating whether two values are equal.

Conforms to: Equatable, Sendable, SendableMetatype
Generating content and performing tasks with Foundation Models
Enhance the experience in your app by prompting an on-device large language model.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
SystemLanguageModel.UseCase.general (Beta)
Type Property
A use case for general prompting.

static let general: SystemLanguageModel.UseCase

This is the default use case for the base version of the model, so if you use SystemLanguageModel.default, you don’t need to specify a use case.
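As a short illustration (assuming the `SystemLanguageModel(useCase:)` initializer shown on the contentTagging page), these two ways of getting the general-purpose model behave the same:

```swift
// Using the base model directly; no use case needed.
let model = SystemLanguageModel.default

// Specifying the general use case explicitly produces the same model behavior.
let explicitModel = SystemLanguageModel(useCase: .general)
```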
static let contentTagging: SystemLanguageModel.UseCase
A use case for content tagging.
SystemLanguageModel.UseCase.contentTagging (Beta)
Type Property
A use case for content tagging.

static let contentTagging: SystemLanguageModel.UseCase

Content tagging produces a list of categorizing tags based on the input prompt. When you specialize the model for the contentTagging use case, it always responds with tags. The tagging capabilities of the model include detecting topics, emotions, actions, and objects.
```swift
// Create an instance of the model with the content tagging use case.
let model = SystemLanguageModel(useCase: .contentTagging)

// Initialize a session with the model.
let session = LanguageModelSession(model: model)
```
If you don’t provide Instructions to the session, the model generates topic-related tags by default. To generate other kinds of tags, like emotions, actions, or objects, specify the kind of tag either in instructions or in a Generable output type.
```swift
let instructions = """
    Tag the three most important actions, emotions, objects,
    and topics in the given input text.
    """
```
The code below prompts the model to respond about a picnic at the beach with tags like “outdoor activity,” “beach,” and “picnic”:
```swift
let prompt = """
    Today we had a lovely picnic with friends at the beach.
    """
let response = try await session.respond(
    to: prompt,
    generating: ContentTaggingResult.self
)
```
Content tagging works best if you define an output structure using guided generation. The code below uses Generable guide descriptions to specify the kinds and quantities of tags the model should produce:
```swift
@Generable
struct ContentTaggingResult {
    @Guide(
        description: "Most important actions in the input text",
        .maximumCount(3)
    )
    let actions: [String]

    @Guide(
        description: "Most important emotions in the input text",
        .maximumCount(3)
    )
    let emotions: [String]

    @Guide(
        description: "Most important objects in the input text",
        .maximumCount(3)
    )
    let objects: [String]

    @Guide(
        description: "Most important topics in the input text",
        .maximumCount(3)
    )
    let topics: [String]
}
```
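To tie the pieces together, a short usage sketch under the assumption that the response exposes the generated value through a `content` property; the printed tags are illustrative:

```swift
let response = try await session.respond(
    to: "Today we had a lovely picnic with friends at the beach.",
    generating: ContentTaggingResult.self
)

// Each property holds up to three tags, per the guides above.
print(response.content.topics)   // e.g. ["picnic", "beach", "friendship"]
print(response.content.emotions) // e.g. ["joy", "relaxation"]
```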
static let general: SystemLanguageModel.UseCase
A use case for general prompting.
SystemLanguageModel.UseCase Equatable Implementations (Beta)
API Collection

static func != (lhs: Self, rhs: Self) -> Bool
Returns a Boolean value indicating whether two values are not equal.
Generating content and performing tasks with Foundation Models
Article
Enhance the experience in your app by prompting an on-device large language model.
The Foundation Models framework lets you tap into the on-device large models at the core of Apple Intelligence. You can enhance your app by using generative models to create content or perform tasks. The framework supports language understanding and generation based on model capabilities, like text extraction and summarization, that you can use to:
- Generate a title, description, or tags for content
- Generate a list of search suggestions relevant to your app
- Transform product reviews into structured data you can visualize
- Invoke your own tools to assist the model with performing app-specific tasks
Check model availability by creating an instance of SystemLanguageModel with the default property. The default property provides the same model Apple Intelligence uses, and supports text generation.
Model availability depends on device factors like:

- The device must support Apple Intelligence.
- The device must have Apple Intelligence turned on in System Settings.
- The device must have sufficient battery and not be in Game Mode.
Always verify model availability first, and plan for a fallback experience in case the model is unavailable.
```swift
struct GenerativeView: View {
    // Create a reference to the system language model.
    private var model = SystemLanguageModel.default

    var body: some View {
        switch model.availability {
        case .available:
            // Show your intelligence UI.
        case .unavailable(.deviceNotEligible):
            // Show an alternative UI.
        case .unavailable(.appleIntelligenceNotEnabled):
            // Ask the person to turn on Apple Intelligence.
        case .unavailable(.modelNotReady):
            // The model isn't ready because it's downloading or because of other system reasons.
        case .unavailable(let other):
            // The model is unavailable for an unknown reason.
        }
    }
}
```
After confirming that the model is available, create a LanguageModelSession object to call the model. For a single-turn interaction, create a new session each time you call the model:

```swift
// Create a session with the system model.
let session = LanguageModelSession()
```
For a multiturn interaction — where the model retains some knowledge of what it produced — reuse the same session each time you call the model.
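For example, a minimal multiturn sketch, assuming the respond(to:) API shown in this article; the prompts are illustrative, and the second call relies on context the session retained from the first:

```swift
let session = LanguageModelSession()

// First turn: the model produces a suggestion.
let tripIdea = try await session.respond(to: "Suggest a weekend hiking trip near Seattle.")

// Second turn: because the session is reused, the model remembers the trip it suggested.
let packingList = try await session.respond(to: "Now write a packing list for that trip.")
```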
A Prompt is an input that the model responds to. Prompt engineering is the art of designing high-quality prompts so that the model generates the best possible response for the request you make. A prompt can be as short as “hello”, or as long as multiple paragraphs. Designing a prompt involves a lot of exploration, including optimizing prompt length and writing style.
When thinking about the prompt you want to use in your app, consider using conversational language in the form of a question or command. For example, “What’s a good month to visit Paris?” or “Generate a food truck menu.”
Write prompts that focus on a single, specific task, like “Write a profile for the dog breed Siberian Husky”. When a prompt is long and complicated, the model takes longer to respond and may respond in unpredictable ways. If you have a complex generation task in mind, break it down into a series of specific prompts.
You can refine your prompt by telling the model exactly how much content it should generate. A prompt like, “Write a profile for the dog breed Siberian Husky” often takes a long time to process as the model generates a full multi-paragraph essay. If you specify “using three sentences”, it speeds up processing and generates a concise summary. Use phrases like “in a single sentence” or “in a few words” to shorten the generation time and produce shorter text.
```swift
// Generate a longer response for a specific command.
let simple = "Write me a story about pears."

// Quickly generate a concise response.
let quick = "Write the profile for the dog breed Siberian Husky using three sentences."
```
Instructions help steer the model in a way that fits the use case of your app. The model obeys prompts at a lower priority than the instructions you provide. When you provide instructions to the model, consider specifying details like:
- What the model’s role is; for example, “You are a mentor,” or “You are a movie critic”.
- What the model should do, like “Help the person extract calendar events,” or “Help the person by recommending search suggestions”.
- What the style preferences are, like “Respond as briefly as possible”.
- What the possible safety measures are, like “Respond with ‘I can’t help with that’ if you’re asked to do something dangerous”.
Use content you trust in instructions because the model follows them more closely than the prompt itself. When you initialize a session with instructions, it affects all prompts the model responds to in that session. Instructions can also include example responses to help steer the model. When you add examples to your prompt, you provide the model with a template that shows the model what a good response looks like.
To call the model with a prompt, call respond(to:options:isolation:) on your session. The response call is asynchronous because it may take a few seconds for Foundation Models to generate the response.

```swift
let instructions = """
    Suggest five related topics. Keep them concise (three to seven words) and make sure they
    build naturally from the person's topic.
    """

let session = LanguageModelSession(instructions: instructions)

let prompt = "Making homemade bread"
let response = try await session.respond(to: prompt)
```
Instead of working with raw string output from the model, the framework offers guided generation to generate a custom Swift data structure you define. For more information about guided generation, see Generating Swift data structures with guided generation.
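As a brief sketch of that approach, reusing the @Generable and @Guide macros shown in the content-tagging example (`SearchSuggestions` is an illustrative type, not from the article):

```swift
// Define the shape of the output you want instead of parsing raw text.
@Generable
struct SearchSuggestions {
    @Guide(description: "A list of suggested search terms", .maximumCount(4))
    let searchTerms: [String]
}

// Ask the session to generate that structure directly.
let suggestions = try await session.respond(
    to: "Generate search suggestions for a hiking app.",
    generating: SearchSuggestions.self
)
```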
When you make a request to the model, you can provide custom tools to help the model complete the request. If the model determines that a Tool can assist with the request, the framework calls your Tool to perform additional actions like retrieving content from your local database. For more information about tool calling, see Expanding generation with tool calling.
To get the best results for your prompt, experiment with different generation options. GenerationOptions affects the runtime parameters of the model, and you can customize them for every request you make.
```swift
// Customize the temperature to increase creativity.
let options = GenerationOptions(temperature: 2.0)

let session = LanguageModelSession()

let prompt = "Write me a story about coffee."
let response = try await session.respond(
    to: prompt,
    options: options
)
```
When you test apps that use the framework, use Xcode Instruments to understand more about the requests you make, like the time it takes to perform a request. When you make a request, you can access the Transcript entries that describe the actions the model takes during your LanguageModelSession.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
SystemLanguageModel.default (Beta)
Type Property
The base version of the model.

static let default: SystemLanguageModel

Mentioned in: Generating content and performing tasks with Foundation Models

The base model is a generic model that is useful for a wide variety of applications, but is not specialized to any particular use case.
-
What the possible safety measures are, like “Respond with ‘I can’t help with that’ if you’re asked to do something dangerous”.
Use content you trust in instructions because the model follows them more closely than the prompt itself. When you initialize a session with instructions, it affects all prompts the model responds to in that session. Instructions can also include example responses to help steer the model. When you add examples to your prompt, you provide the model with a template that shows the model what a good response looks like.
To call the model with a prompt, call respond(to:options:isolation:)
on your session. The response call is asynchronous because it may take a few seconds for Foundation Models to generate the response.
let instructions = """
Suggest five related topics. Keep them concise (three to seven words) and make sure they
build naturally from the person's topic.
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = "Making homemade bread" let response = try await session.respond(to: prompt)
Instead of working with raw string output from the model, the framework offers guided generation to generate a custom Swift data structure you define. For more information about guided generation, see Generating Swift data structures with guided generation.
When you make a request to the model, you can provide custom tools to help the model complete the request. If the model determines that a Tool
can assist with the request, the framework calls your Tool
to perform additional actions like retrieving content from your local database. For more information about tool calling, see Expanding generation with tool calling
To get the best results for your prompt, experiment with different generation options. GenerationOptions
affects the runtime parameters of the model, and you can customize them for every request you make.
// Customize the temperature to increase creativity. let options = GenerationOptions(temperature: 2.0)
let session = LanguageModelSession()
let prompt = "Write me a story about coffee." let response = try await session.respond( to: prompt, options: options )
When you test apps that use the framework, use Xcode Instruments to understand more about the requests you make, like the time it takes to perform a request. When you make a request, you can access the Transcript
entries that describe the actions the model takes during your LanguageModelSession
.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/generating-content-and-performing-tasks-with-foundation-models/#Provide-a-prompt-to-the-model
- Foundation Models
- Generating content and performing tasks with Foundation Models
Article
Enhance the experience in your app by prompting an on-device large language model.
The Foundation Models framework lets you tap into the on-device large models at the core of Apple Intelligence. You can enhance your app by using generative models to create content or perform tasks. The framework supports language understanding and generation based on models capabilities like text extraction and summarization that you can use to:
-
Generate a title, description, or tags for content
-
Generate a list of search suggestions relevant to your app
-
Transform product reviews into structured data you can visualize
-
Invoke your own tools to assist the model with performing app-specific tasks
Check model availability by creating an instance of SystemLanguageModel
with the default
property. The default
property provides the same model Apple Intelligence uses, and supports text generation.
Model availability depends on device factors like:
-
The device must support Apple Intelligence.
-
The device must have Apple Intelligence turned on in System Settings.
-
The device must have sufficient battery and not be in Game Mode.
Always verify model availability first, and plan for a fallback experience in case the model is unavailable.
struct GenerativeView: View { // Create a reference to the system language model. private var model = SystemLanguageModel.default
var body: some View { switch model.availability { case .available: // Show your intelligence UI. case .unavailable(.deviceNotEligible): // Show an alternative UI. case .unavailable(.appleIntelligenceNotEnabled): // Ask the person to turn on Apple Intelligence. case .unavailable(.modelNotReady): // The model isn't ready because it's downloading or because of other system reasons. case .unavailable(let other): // The model is unavailable for an unknown reason. } } }
After confirming that the model is available, create a LanguageModelSession
object to call the model. For a single-turn interaction, create a new session each time you call the model:
// Create a session with the system model. let session = LanguageModelSession()
For a multiturn interaction — where the model retains some knowledge of what it produced — reuse the same session each time you call the model.
A Prompt
is an input that the model responds to. Prompt engineering is the art of designing high-quality prompts so that the model generates a best possible response for the request you make. A prompt can be as short as “hello”, or as long as multiple paragraphs. The process of designing a prompt involves a lot of exploration to discover the best prompt, and involves optimizing prompt length and writing style.
When thinking about the prompt you want to use in your app, consider using conversational language in the form of a question or command. For example, “What’s a good month to visit Paris?” or “Generate a food truck menu.”
Write prompts that focus on a single and specific task, like “Write a profile for the dog breed Siberian Husky”. When a prompt is long and complicated, the model takes longer to respond, and may respond in unpredictable ways. If you have a complex generation task in mind, break the task down into a series of specific prompts.
You can refine your prompt by telling the model exactly how much content it should generate. A prompt like, “Write a profile for the dog breed Siberian Husky” often takes a long time to process as the model generates a full multi-paragraph essay. If you specify “using three sentences”, it speeds up processing and generates a concise summary. Use phrases like “in a single sentence” or “in a few words” to shorten the generation time and produce shorter text.
// Generate a longer response for a specific command. let simple = "Write me a story about pears."
// Quickly generate a concise response. let quick = "Write the profile for the dog breed Siberian Husky using three sentences."
Instructions
help steer the model in a way that fits the use case of your app. The model obeys prompts at a lower priority than the instructions you provide. When you provide instructions to the model, consider specifying details like:
-
What the model’s role is; for example, “You are a mentor,” or “You are a movie critic”.
-
What the model should do, like “Help the person extract calendar events,” or “Help the person by recommending search suggestions”.
-
What the style preferences are, like “Respond as briefly as possible”.
-
What the possible safety measures are, like “Respond with ‘I can’t help with that’ if you’re asked to do something dangerous”.
Use content you trust in instructions because the model follows them more closely than the prompt itself. When you initialize a session with instructions, it affects all prompts the model responds to in that session. Instructions can also include example responses to help steer the model. When you add examples to your prompt, you provide the model with a template that shows the model what a good response looks like.
To call the model with a prompt, call respond(to:options:isolation:)
on your session. The response call is asynchronous because it may take a few seconds for Foundation Models to generate the response.
let instructions = """
Suggest five related topics. Keep them concise (three to seven words) and make sure they
build naturally from the person's topic.
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = "Making homemade bread" let response = try await session.respond(to: prompt)
Instead of working with raw string output from the model, the framework offers guided generation to generate a custom Swift data structure you define. For more information about guided generation, see Generating Swift data structures with guided generation.
When you make a request to the model, you can provide custom tools to help the model complete the request. If the model determines that a Tool
can assist with the request, the framework calls your Tool
to perform additional actions like retrieving content from your local database. For more information about tool calling, see Expanding generation with tool calling
To get the best results for your prompt, experiment with different generation options. GenerationOptions
affects the runtime parameters of the model, and you can customize them for every request you make.
// Customize the temperature to increase creativity. let options = GenerationOptions(temperature: 2.0)
let session = LanguageModelSession()
let prompt = "Write me a story about coffee." let response = try await session.respond( to: prompt, options: options )
When you test apps that use the framework, use Xcode Instruments to understand more about the requests you make, like the time it takes to perform a request. When you make a request, you can access the Transcript
entries that describe the actions the model takes during your LanguageModelSession
.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/generating-content-and-performing-tasks-with-foundation-models/#Provide-instructions-to-the-model
- Foundation Models
- Generating content and performing tasks with Foundation Models
Article
Enhance the experience in your app by prompting an on-device large language model.
The Foundation Models framework lets you tap into the on-device large models at the core of Apple Intelligence. You can enhance your app by using generative models to create content or perform tasks. The framework supports language understanding and generation based on models capabilities like text extraction and summarization that you can use to:
-
Generate a title, description, or tags for content
-
Generate a list of search suggestions relevant to your app
-
Transform product reviews into structured data you can visualize
-
Invoke your own tools to assist the model with performing app-specific tasks
Check model availability by creating an instance of SystemLanguageModel
with the default
property. The default
property provides the same model Apple Intelligence uses, and supports text generation.
Model availability depends on device factors like:
-
The device must support Apple Intelligence.
-
The device must have Apple Intelligence turned on in System Settings.
-
The device must have sufficient battery and not be in Game Mode.
Always verify model availability first, and plan for a fallback experience in case the model is unavailable.
struct GenerativeView: View { // Create a reference to the system language model. private var model = SystemLanguageModel.default
var body: some View { switch model.availability { case .available: // Show your intelligence UI. case .unavailable(.deviceNotEligible): // Show an alternative UI. case .unavailable(.appleIntelligenceNotEnabled): // Ask the person to turn on Apple Intelligence. case .unavailable(.modelNotReady): // The model isn't ready because it's downloading or because of other system reasons. case .unavailable(let other): // The model is unavailable for an unknown reason. } } }
After confirming that the model is available, create a LanguageModelSession
object to call the model. For a single-turn interaction, create a new session each time you call the model:
// Create a session with the system model. let session = LanguageModelSession()
For a multiturn interaction — where the model retains some knowledge of what it produced — reuse the same session each time you call the model.
A Prompt
is an input that the model responds to. Prompt engineering is the art of designing high-quality prompts so that the model generates a best possible response for the request you make. A prompt can be as short as “hello”, or as long as multiple paragraphs. The process of designing a prompt involves a lot of exploration to discover the best prompt, and involves optimizing prompt length and writing style.
When thinking about the prompt you want to use in your app, consider using conversational language in the form of a question or command. For example, “What’s a good month to visit Paris?” or “Generate a food truck menu.”
Write prompts that focus on a single and specific task, like “Write a profile for the dog breed Siberian Husky”. When a prompt is long and complicated, the model takes longer to respond, and may respond in unpredictable ways. If you have a complex generation task in mind, break the task down into a series of specific prompts.
You can refine your prompt by telling the model exactly how much content it should generate. A prompt like, “Write a profile for the dog breed Siberian Husky” often takes a long time to process as the model generates a full multi-paragraph essay. If you specify “using three sentences”, it speeds up processing and generates a concise summary. Use phrases like “in a single sentence” or “in a few words” to shorten the generation time and produce shorter text.
// Generate a longer response for a specific command. let simple = "Write me a story about pears."
// Quickly generate a concise response. let quick = "Write the profile for the dog breed Siberian Husky using three sentences."
Instructions
help steer the model in a way that fits the use case of your app. The model obeys prompts at a lower priority than the instructions you provide. When you provide instructions to the model, consider specifying details like:
-
What the model’s role is; for example, “You are a mentor,” or “You are a movie critic”.
-
What the model should do, like “Help the person extract calendar events,” or “Help the person by recommending search suggestions”.
-
What the style preferences are, like “Respond as briefly as possible”.
-
What the possible safety measures are, like “Respond with ‘I can’t help with that’ if you’re asked to do something dangerous”.
Use content you trust in instructions because the model follows them more closely than the prompt itself. When you initialize a session with instructions, it affects all prompts the model responds to in that session. Instructions can also include example responses to help steer the model. When you add examples to your prompt, you provide the model with a template that shows the model what a good response looks like.
To call the model with a prompt, call respond(to:options:isolation:)
on your session. The response call is asynchronous because it may take a few seconds for Foundation Models to generate the response.
let instructions = """
Suggest five related topics. Keep them concise (three to seven words) and make sure they
build naturally from the person's topic.
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = "Making homemade bread" let response = try await session.respond(to: prompt)
Instead of working with raw string output from the model, the framework offers guided generation to generate a custom Swift data structure you define. For more information about guided generation, see Generating Swift data structures with guided generation.
When you make a request to the model, you can provide custom tools to help the model complete the request. If the model determines that a Tool
can assist with the request, the framework calls your Tool
to perform additional actions like retrieving content from your local database. For more information about tool calling, see Expanding generation with tool calling
To get the best results for your prompt, experiment with different generation options. GenerationOptions
affects the runtime parameters of the model, and you can customize them for every request you make.
// Customize the temperature to increase creativity. let options = GenerationOptions(temperature: 2.0)
let session = LanguageModelSession()
let prompt = "Write me a story about coffee." let response = try await session.respond( to: prompt, options: options )
When you test apps that use the framework, use Xcode Instruments to understand more about the requests you make, like the time it takes to perform a request. When you make a request, you can access the Transcript
entries that describe the actions the model takes during your LanguageModelSession
.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/generating-content-and-performing-tasks-with-foundation-models/#Generate-a-response
- Foundation Models
- Generating content and performing tasks with Foundation Models
Article
Enhance the experience in your app by prompting an on-device large language model.
The Foundation Models framework lets you tap into the on-device large models at the core of Apple Intelligence. You can enhance your app by using generative models to create content or perform tasks. The framework supports language understanding and generation based on models capabilities like text extraction and summarization that you can use to:
-
Generate a title, description, or tags for content
-
Generate a list of search suggestions relevant to your app
-
Transform product reviews into structured data you can visualize
-
Invoke your own tools to assist the model with performing app-specific tasks
Check model availability by creating an instance of SystemLanguageModel
with the default
property. The default
property provides the same model Apple Intelligence uses, and supports text generation.
Model availability depends on device factors like:
-
The device must support Apple Intelligence.
-
The device must have Apple Intelligence turned on in System Settings.
-
The device must have sufficient battery and not be in Game Mode.
Always verify model availability first, and plan for a fallback experience in case the model is unavailable.
struct GenerativeView: View { // Create a reference to the system language model. private var model = SystemLanguageModel.default
var body: some View { switch model.availability { case .available: // Show your intelligence UI. case .unavailable(.deviceNotEligible): // Show an alternative UI. case .unavailable(.appleIntelligenceNotEnabled): // Ask the person to turn on Apple Intelligence. case .unavailable(.modelNotReady): // The model isn't ready because it's downloading or because of other system reasons. case .unavailable(let other): // The model is unavailable for an unknown reason. } } }
After confirming that the model is available, create a LanguageModelSession
object to call the model. For a single-turn interaction, create a new session each time you call the model:
// Create a session with the system model. let session = LanguageModelSession()
For a multiturn interaction — where the model retains some knowledge of what it produced — reuse the same session each time you call the model.
A Prompt
is an input that the model responds to. Prompt engineering is the art of designing high-quality prompts so that the model generates a best possible response for the request you make. A prompt can be as short as “hello”, or as long as multiple paragraphs. The process of designing a prompt involves a lot of exploration to discover the best prompt, and involves optimizing prompt length and writing style.
When thinking about the prompt you want to use in your app, consider using conversational language in the form of a question or command. For example, “What’s a good month to visit Paris?” or “Generate a food truck menu.”
Write prompts that focus on a single and specific task, like “Write a profile for the dog breed Siberian Husky”. When a prompt is long and complicated, the model takes longer to respond, and may respond in unpredictable ways. If you have a complex generation task in mind, break the task down into a series of specific prompts.
You can refine your prompt by telling the model exactly how much content it should generate. A prompt like, “Write a profile for the dog breed Siberian Husky” often takes a long time to process as the model generates a full multi-paragraph essay. If you specify “using three sentences”, it speeds up processing and generates a concise summary. Use phrases like “in a single sentence” or “in a few words” to shorten the generation time and produce shorter text.
// Generate a longer response for a specific command. let simple = "Write me a story about pears."
// Quickly generate a concise response. let quick = "Write the profile for the dog breed Siberian Husky using three sentences."
Instructions
help steer the model in a way that fits the use case of your app. The model obeys prompts at a lower priority than the instructions you provide. When you provide instructions to the model, consider specifying details like:
-
What the model’s role is; for example, “You are a mentor,” or “You are a movie critic”.
-
What the model should do, like “Help the person extract calendar events,” or “Help the person by recommending search suggestions”.
-
What the style preferences are, like “Respond as briefly as possible”.
-
What the possible safety measures are, like “Respond with ‘I can’t help with that’ if you’re asked to do something dangerous”.
Use content you trust in instructions because the model follows them more closely than the prompt itself. When you initialize a session with instructions, it affects all prompts the model responds to in that session. Instructions can also include example responses to help steer the model. When you add examples to your prompt, you provide the model with a template that shows the model what a good response looks like.
To call the model with a prompt, call respond(to:options:isolation:)
on your session. The response call is asynchronous because it may take a few seconds for Foundation Models to generate the response.
let instructions = """
Suggest five related topics. Keep them concise (three to seven words) and make sure they
build naturally from the person's topic.
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = "Making homemade bread" let response = try await session.respond(to: prompt)
Instead of working with raw string output from the model, the framework offers guided generation to generate a custom Swift data structure you define. For more information about guided generation, see Generating Swift data structures with guided generation.
When you make a request to the model, you can provide custom tools to help the model complete the request. If the model determines that a Tool
can assist with the request, the framework calls your Tool
to perform additional actions like retrieving content from your local database. For more information about tool calling, see Expanding generation with tool calling
To get the best results for your prompt, experiment with different generation options. GenerationOptions
affects the runtime parameters of the model, and you can customize them for every request you make.
// Customize the temperature to increase creativity. let options = GenerationOptions(temperature: 2.0)
let session = LanguageModelSession()
let prompt = "Write me a story about coffee." let response = try await session.respond( to: prompt, options: options )
When you test apps that use the framework, use Xcode Instruments to understand more about the requests you make, like the time it takes to perform a request. When you make a request, you can access the Transcript
entries that describe the actions the model takes during your LanguageModelSession
.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/languagemodelsession/respond(to:options:isolation:
Search developer.apple.comSearch Icon
https://developer.apple.com/documentation/foundationmodels/generating-content-and-performing-tasks-with-foundation-models/#Tune-generation-options-and-optimize-performance
- Foundation Models
- Generating content and performing tasks with Foundation Models
Article
Enhance the experience in your app by prompting an on-device large language model.
The Foundation Models framework lets you tap into the on-device large models at the core of Apple Intelligence. You can enhance your app by using generative models to create content or perform tasks. The framework supports language understanding and generation based on models capabilities like text extraction and summarization that you can use to:
-
Generate a title, description, or tags for content
-
Generate a list of search suggestions relevant to your app
-
Transform product reviews into structured data you can visualize
-
Invoke your own tools to assist the model with performing app-specific tasks
Check model availability by creating an instance of SystemLanguageModel
with the default
property. The default
property provides the same model Apple Intelligence uses, and supports text generation.
Model availability depends on device factors like:
-
The device must support Apple Intelligence.
-
The device must have Apple Intelligence turned on in System Settings.
-
The device must have sufficient battery and not be in Game Mode.
Always verify model availability first, and plan for a fallback experience in case the model is unavailable.
struct GenerativeView: View { // Create a reference to the system language model. private var model = SystemLanguageModel.default
var body: some View { switch model.availability { case .available: // Show your intelligence UI. case .unavailable(.deviceNotEligible): // Show an alternative UI. case .unavailable(.appleIntelligenceNotEnabled): // Ask the person to turn on Apple Intelligence. case .unavailable(.modelNotReady): // The model isn't ready because it's downloading or because of other system reasons. case .unavailable(let other): // The model is unavailable for an unknown reason. } } }
After confirming that the model is available, create a LanguageModelSession
object to call the model. For a single-turn interaction, create a new session each time you call the model:
// Create a session with the system model. let session = LanguageModelSession()
For a multiturn interaction — where the model retains some knowledge of what it produced — reuse the same session each time you call the model.
A Prompt
is an input that the model responds to. Prompt engineering is the art of designing high-quality prompts so that the model generates a best possible response for the request you make. A prompt can be as short as “hello”, or as long as multiple paragraphs. The process of designing a prompt involves a lot of exploration to discover the best prompt, and involves optimizing prompt length and writing style.
When thinking about the prompt you want to use in your app, consider using conversational language in the form of a question or command. For example, “What’s a good month to visit Paris?” or “Generate a food truck menu.”
Write prompts that focus on a single and specific task, like “Write a profile for the dog breed Siberian Husky”. When a prompt is long and complicated, the model takes longer to respond, and may respond in unpredictable ways. If you have a complex generation task in mind, break the task down into a series of specific prompts.
You can refine your prompt by telling the model exactly how much content it should generate. A prompt like, “Write a profile for the dog breed Siberian Husky” often takes a long time to process as the model generates a full multi-paragraph essay. If you specify “using three sentences”, it speeds up processing and generates a concise summary. Use phrases like “in a single sentence” or “in a few words” to shorten the generation time and produce shorter text.
// Generate a longer response for a specific command. let simple = "Write me a story about pears."
// Quickly generate a concise response. let quick = "Write the profile for the dog breed Siberian Husky using three sentences."
Instructions
help steer the model in a way that fits the use case of your app. The model obeys prompts at a lower priority than the instructions you provide. When you provide instructions to the model, consider specifying details like:
-
What the model’s role is; for example, “You are a mentor,” or “You are a movie critic”.
-
What the model should do, like “Help the person extract calendar events,” or “Help the person by recommending search suggestions”.
-
What the style preferences are, like “Respond as briefly as possible”.
-
What the possible safety measures are, like “Respond with ‘I can’t help with that’ if you’re asked to do something dangerous”.
Use content you trust in instructions because the model follows them more closely than the prompt itself. When you initialize a session with instructions, it affects all prompts the model responds to in that session. Instructions can also include example responses to help steer the model. When you add examples to your prompt, you provide the model with a template that shows the model what a good response looks like.
To call the model with a prompt, call respond(to:options:isolation:)
on your session. The response call is asynchronous because it may take a few seconds for Foundation Models to generate the response.
let instructions = """
Suggest five related topics. Keep them concise (three to seven words) and make sure they
build naturally from the person's topic.
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = "Making homemade bread" let response = try await session.respond(to: prompt)
Instead of working with raw string output from the model, the framework offers guided generation to generate a custom Swift data structure you define. For more information about guided generation, see Generating Swift data structures with guided generation.
When you make a request to the model, you can provide custom tools to help the model complete the request. If the model determines that a Tool
can assist with the request, the framework calls your Tool
to perform additional actions like retrieving content from your local database. For more information about tool calling, see Expanding generation with tool calling
To get the best results for your prompt, experiment with different generation options. GenerationOptions
affects the runtime parameters of the model, and you can customize them for every request you make.
// Customize the temperature to increase creativity. let options = GenerationOptions(temperature: 2.0)
let session = LanguageModelSession()
let prompt = "Write me a story about coffee." let response = try await session.respond( to: prompt, options: options )
When you test apps that use the framework, use Xcode Instruments to understand more about the requests you make, like the time it takes to perform a request. When you make a request, you can access the Transcript
entries that describe the actions the model takes during your LanguageModelSession
.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/generating-content-and-performing-tasks-with-foundation-models/#see-also
- Foundation Models
- Generating content and performing tasks with Foundation Models
Article
Enhance the experience in your app by prompting an on-device large language model.
The Foundation Models framework lets you tap into the on-device large models at the core of Apple Intelligence. You can enhance your app by using generative models to create content or perform tasks. The framework supports language understanding and generation based on models capabilities like text extraction and summarization that you can use to:
-
Generate a title, description, or tags for content
-
Generate a list of search suggestions relevant to your app
-
Transform product reviews into structured data you can visualize
-
Invoke your own tools to assist the model with performing app-specific tasks
Check model availability by creating an instance of SystemLanguageModel
with the default
property. The default
property provides the same model Apple Intelligence uses, and supports text generation.
Model availability depends on device factors like:
-
The device must support Apple Intelligence.
-
The device must have Apple Intelligence turned on in System Settings.
-
The device must have sufficient battery and not be in Game Mode.
Always verify model availability first, and plan for a fallback experience in case the model is unavailable.
struct GenerativeView: View { // Create a reference to the system language model. private var model = SystemLanguageModel.default
var body: some View { switch model.availability { case .available: // Show your intelligence UI. case .unavailable(.deviceNotEligible): // Show an alternative UI. case .unavailable(.appleIntelligenceNotEnabled): // Ask the person to turn on Apple Intelligence. case .unavailable(.modelNotReady): // The model isn't ready because it's downloading or because of other system reasons. case .unavailable(let other): // The model is unavailable for an unknown reason. } } }
After confirming that the model is available, create a LanguageModelSession
object to call the model. For a single-turn interaction, create a new session each time you call the model:
// Create a session with the system model. let session = LanguageModelSession()
For a multiturn interaction — where the model retains some knowledge of what it produced — reuse the same session each time you call the model.
A Prompt
is an input that the model responds to. Prompt engineering is the art of designing high-quality prompts so that the model generates a best possible response for the request you make. A prompt can be as short as “hello”, or as long as multiple paragraphs. The process of designing a prompt involves a lot of exploration to discover the best prompt, and involves optimizing prompt length and writing style.
When thinking about the prompt you want to use in your app, consider using conversational language in the form of a question or command. For example, “What’s a good month to visit Paris?” or “Generate a food truck menu.”
Write prompts that focus on a single and specific task, like “Write a profile for the dog breed Siberian Husky”. When a prompt is long and complicated, the model takes longer to respond, and may respond in unpredictable ways. If you have a complex generation task in mind, break the task down into a series of specific prompts.
You can refine your prompt by telling the model exactly how much content it should generate. A prompt like, “Write a profile for the dog breed Siberian Husky” often takes a long time to process as the model generates a full multi-paragraph essay. If you specify “using three sentences”, it speeds up processing and generates a concise summary. Use phrases like “in a single sentence” or “in a few words” to shorten the generation time and produce shorter text.
// Generate a longer response for a specific command. let simple = "Write me a story about pears."
// Quickly generate a concise response. let quick = "Write the profile for the dog breed Siberian Husky using three sentences."
Instructions
help steer the model in a way that fits the use case of your app. The model obeys prompts at a lower priority than the instructions you provide. When you provide instructions to the model, consider specifying details like:
-
What the model’s role is; for example, “You are a mentor,” or “You are a movie critic”.
-
What the model should do, like “Help the person extract calendar events,” or “Help the person by recommending search suggestions”.
-
What the style preferences are, like “Respond as briefly as possible”.
-
What the possible safety measures are, like “Respond with ‘I can’t help with that’ if you’re asked to do something dangerous”.
Use content you trust in instructions because the model follows them more closely than the prompt itself. When you initialize a session with instructions, it affects all prompts the model responds to in that session. Instructions can also include example responses to help steer the model. When you add examples to your prompt, you provide the model with a template that shows the model what a good response looks like.
To call the model with a prompt, call respond(to:options:isolation:)
on your session. The response call is asynchronous because it may take a few seconds for Foundation Models to generate the response.
let instructions = """
Suggest five related topics. Keep them concise (three to seven words) and make sure they
build naturally from the person's topic.
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = "Making homemade bread" let response = try await session.respond(to: prompt)
Instead of working with raw string output from the model, the framework offers guided generation to generate a custom Swift data structure you define. For more information about guided generation, see Generating Swift data structures with guided generation.
When you make a request to the model, you can provide custom tools to help the model complete the request. If the model determines that a Tool
can assist with the request, the framework calls your Tool
to perform additional actions like retrieving content from your local database. For more information about tool calling, see Expanding generation with tool calling
To get the best results for your prompt, experiment with different generation options. GenerationOptions
affects the runtime parameters of the model, and you can customize them for every request you make.
// Customize the temperature to increase creativity. let options = GenerationOptions(temperature: 2.0)
let session = LanguageModelSession()
let prompt = "Write me a story about coffee." let response = try await session.respond( to: prompt, options: options )
When you test apps that use the framework, use Xcode Instruments to understand more about the requests you make, like the time it takes to perform a request. When you make a request, you can access the Transcript
entries that describe the actions the model takes during your LanguageModelSession
.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/generating-content-and-performing-tasks-with-foundation-models/#Essentials
- Foundation Models
- Generating content and performing tasks with Foundation Models
Article
Enhance the experience in your app by prompting an on-device large language model.
The Foundation Models framework lets you tap into the on-device large models at the core of Apple Intelligence. You can enhance your app by using generative models to create content or perform tasks. The framework supports language understanding and generation based on models capabilities like text extraction and summarization that you can use to:
-
Generate a title, description, or tags for content
-
Generate a list of search suggestions relevant to your app
-
Transform product reviews into structured data you can visualize
-
Invoke your own tools to assist the model with performing app-specific tasks
Check model availability by creating an instance of SystemLanguageModel
with the default
property. The default
property provides the same model Apple Intelligence uses, and supports text generation.
Model availability depends on device factors like:
-
The device must support Apple Intelligence.
-
The device must have Apple Intelligence turned on in System Settings.
-
The device must have sufficient battery and not be in Game Mode.
Always verify model availability first, and plan for a fallback experience in case the model is unavailable.
struct GenerativeView: View { // Create a reference to the system language model. private var model = SystemLanguageModel.default
var body: some View { switch model.availability { case .available: // Show your intelligence UI. case .unavailable(.deviceNotEligible): // Show an alternative UI. case .unavailable(.appleIntelligenceNotEnabled): // Ask the person to turn on Apple Intelligence. case .unavailable(.modelNotReady): // The model isn't ready because it's downloading or because of other system reasons. case .unavailable(let other): // The model is unavailable for an unknown reason. } } }
After confirming that the model is available, create a LanguageModelSession
object to call the model. For a single-turn interaction, create a new session each time you call the model:
// Create a session with the system model. let session = LanguageModelSession()
For a multiturn interaction — where the model retains some knowledge of what it produced — reuse the same session each time you call the model.
A Prompt
is an input that the model responds to. Prompt engineering is the art of designing high-quality prompts so that the model generates a best possible response for the request you make. A prompt can be as short as “hello”, or as long as multiple paragraphs. The process of designing a prompt involves a lot of exploration to discover the best prompt, and involves optimizing prompt length and writing style.
When thinking about the prompt you want to use in your app, consider using conversational language in the form of a question or command. For example, “What’s a good month to visit Paris?” or “Generate a food truck menu.”
Write prompts that focus on a single and specific task, like “Write a profile for the dog breed Siberian Husky”. When a prompt is long and complicated, the model takes longer to respond, and may respond in unpredictable ways. If you have a complex generation task in mind, break the task down into a series of specific prompts.
You can refine your prompt by telling the model exactly how much content it should generate. A prompt like, “Write a profile for the dog breed Siberian Husky” often takes a long time to process as the model generates a full multi-paragraph essay. If you specify “using three sentences”, it speeds up processing and generates a concise summary. Use phrases like “in a single sentence” or “in a few words” to shorten the generation time and produce shorter text.
// Generate a longer response for a specific command. let simple = "Write me a story about pears."
// Quickly generate a concise response. let quick = "Write the profile for the dog breed Siberian Husky using three sentences."
Instructions
help steer the model in a way that fits the use case of your app. The model obeys prompts at a lower priority than the instructions you provide. When you provide instructions to the model, consider specifying details like:
-
What the model’s role is; for example, “You are a mentor,” or “You are a movie critic”.
-
What the model should do, like “Help the person extract calendar events,” or “Help the person by recommending search suggestions”.
-
What the style preferences are, like “Respond as briefly as possible”.
-
What the possible safety measures are, like “Respond with ‘I can’t help with that’ if you’re asked to do something dangerous”.
Use content you trust in instructions because the model follows them more closely than the prompt itself. When you initialize a session with instructions, it affects all prompts the model responds to in that session. Instructions can also include example responses to help steer the model. When you add examples to your prompt, you provide the model with a template that shows the model what a good response looks like.
To call the model with a prompt, call respond(to:options:isolation:)
on your session. The response call is asynchronous because it may take a few seconds for Foundation Models to generate the response.
let instructions = """
Suggest five related topics. Keep them concise (three to seven words) and make sure they
build naturally from the person's topic.
"""
let session = LanguageModelSession(instructions: instructions)
let prompt = "Making homemade bread" let response = try await session.respond(to: prompt)
Instead of working with raw string output from the model, the framework offers guided generation to generate a custom Swift data structure you define. For more information about guided generation, see Generating Swift data structures with guided generation.
When you make a request to the model, you can provide custom tools to help the model complete the request. If the model determines that a Tool
can assist with the request, the framework calls your Tool
to perform additional actions like retrieving content from your local database. For more information about tool calling, see Expanding generation with tool calling
To get the best results for your prompt, experiment with different generation options. GenerationOptions
affects the runtime parameters of the model, and you can customize them for every request you make.
// Customize the temperature to increase creativity. let options = GenerationOptions(temperature: 2.0)
let session = LanguageModelSession()
let prompt = "Write me a story about coffee." let response = try await session.respond( to: prompt, options: options )
When you test apps that use the framework, use Xcode Instruments to understand more about the requests you make, like the time it takes to perform a request. When you make a request, you can access the Transcript
entries that describe the actions the model takes during your LanguageModelSession
.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
https://developer.apple.com/documentation/foundationmodels/generating-content-and-performing-tasks-with-foundation-models#app-main)
- Foundation Models
- Generating content and performing tasks with Foundation Models
Article
Enhance the experience in your app by prompting an on-device large language model.
The Foundation Models framework lets you tap into the on-device large models at the core of Apple Intelligence. You can enhance your app by using generative models to create content or perform tasks. The framework supports language understanding and generation based on models capabilities like text extraction and summarization that you can use to:
-
Generate a title, description, or tags for content
-
Generate a list of search suggestions relevant to your app
-
Transform product reviews into structured data you can visualize
-
Invoke your own tools to assist the model with performing app-specific tasks
Check model availability by creating an instance of SystemLanguageModel
with the default
property. The default
property provides the same model Apple Intelligence uses, and supports text generation.
Model availability depends on device factors like:
-
The device must support Apple Intelligence.
-
The device must have Apple Intelligence turned on in System Settings.
-
The device must have sufficient battery and not be in Game Mode.
Always verify model availability first, and plan for a fallback experience in case the model is unavailable.
struct GenerativeView: View { // Create a reference to the system language model. private var model = SystemLanguageModel.default
var body: some View { switch model.availability { case .available: // Show your intelligence UI. case .unavailable(.deviceNotEligible): // Show an alternative UI. case .unavailable(.appleIntelligenceNotEnabled): // Ask the person to turn on Apple Intelligence. case .unavailable(.modelNotReady): // The model isn't ready because it's downloading or because of other system reasons. case .unavailable(let other): // The model is unavailable for an unknown reason. } } }
After confirming that the model is available, create a LanguageModelSession
object to call the model. For a single-turn interaction, create a new session each time you call the model:
// Create a session with the system model. let session = LanguageModelSession()
For a multiturn interaction — where the model retains some knowledge of what it produced — reuse the same session each time you call the model.
A Prompt
is an input that the model responds to. Prompt engineering is the art of designing high-quality prompts so that the model generates a best possible response for the request you make. A prompt can be as short as “hello”, or as long as multiple paragraphs. The process of designing a prompt involves a lot of exploration to discover the best prompt, and involves optimizing prompt length and writing style.
When thinking about the prompt you want to use in your app, consider using conversational language in the form of a question or command. For example, “What’s a good month to visit Paris?” or “Generate a food truck menu.”
Write prompts that focus on a single and specific task, like “Write a profile for the dog breed Siberian Husky”. When a prompt is long and complicated, the model takes longer to respond, and may respond in unpredictable ways. If you have a complex generation task in mind, break the task down into a series of specific prompts.
You can refine your prompt by telling the model exactly how much content it should generate. A prompt like, “Write a profile for the dog breed Siberian Husky” often takes a long time to process as the model generates a full multi-paragraph essay. If you specify “using three sentences”, it speeds up processing and generates a concise summary. Use phrases like “in a single sentence” or “in a few words” to shorten the generation time and produce shorter text.
// Generate a longer response for a specific command. let simple = "Write me a story about pears."
// Quickly generate a concise response. let quick = "Write the profile for the dog breed Siberian Husky using three sentences."
Instructions
help steer the model in a way that fits the use case of your app. The model obeys prompts at a lower priority than the instructions you provide. When you provide instructions to the model, consider specifying details like:
-
What the model’s role is; for example, “You are a mentor,” or “You are a movie critic”.
-
What the model should do, like “Help the person extract calendar events,” or “Help the person by recommending search suggestions”.
-
What the style preferences are, like “Respond as briefly as possible”.
-
What the possible safety measures are, like “Respond with ‘I can’t help with that’ if you’re asked to do something dangerous”.
Use content you trust in instructions because the model follows them more closely than the prompt itself. When you initialize a session with instructions, they affect all prompts the model responds to in that session. Instructions can also include example responses to help steer the model. When you add examples, you give the model a template that shows what a good response looks like.
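For instance, instructions that combine a role, a task, a style preference, a safety measure, and one example response might look like the following sketch (the wording is illustrative, not a recommended template):

```swift
let instructions = """
You are a movie critic. Help the person by recommending films they might enjoy.
Respond as briefly as possible.
Respond with "I can't help with that" if you're asked to do something dangerous.

Example response:
"Try 'The Iron Giant': a warm, beautifully animated story about friendship."
"""

let session = LanguageModelSession(instructions: instructions)
```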
To call the model with a prompt, call respond(to:options:isolation:) on your session. The response call is asynchronous because it may take a few seconds for Foundation Models to generate the response.
```swift
let instructions = """
Suggest five related topics. Keep them concise (three to seven words) and make sure they
build naturally from the person's topic.
"""

let session = LanguageModelSession(instructions: instructions)

let prompt = "Making homemade bread"
let response = try await session.respond(to: prompt)
```
Instead of working with raw string output from the model, the framework offers guided generation to generate a custom Swift data structure you define. For more information about guided generation, see Generating Swift data structures with guided generation.
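As a brief sketch of that approach, assuming the @Generable and @Guide macros and the respond(to:generating:) variant described in that article, you can have the model fill in a Swift type directly:

```swift
// A hypothetical generable type; the property and guide text are illustrative.
@Generable
struct SearchSuggestions {
    @Guide(description: "A list of suggested search terms", .count(4))
    var searchTerms: [String]
}

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Generate a list of suggested search terms for a travel app.",
    generating: SearchSuggestions.self
)

// The response content is an instance of your type, not a raw string.
print(response.content.searchTerms)
```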
When you make a request to the model, you can provide custom tools to help the model complete the request. If the model determines that a Tool can assist with the request, the framework calls your Tool to perform additional actions, like retrieving content from your local database. For more information about tool calling, see Expanding generation with tool calling.
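A minimal tool might look like the sketch below; the FindContacts name, its arguments, and the placeholder lookup are illustrative, and it assumes the Tool protocol's name and description requirements plus a ToolOutput string initializer:

```swift
struct FindContacts: Tool {
    let name = "findContacts"
    let description = "Finds a specific number of contacts."

    @Generable
    struct Arguments {
        @Guide(description: "The number of contacts to get", .range(1...10))
        let count: Int
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Query your local database here; this placeholder returns sample names.
        let contacts = ["Alice", "Bruno", "Chen"].prefix(arguments.count)
        return ToolOutput(contacts.joined(separator: ", "))
    }
}

// Pass tools when you create the session so the model can call them.
let session = LanguageModelSession(tools: [FindContacts()])
```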
To get the best results for your prompt, experiment with different generation options. GenerationOptions affects the runtime parameters of the model, and you can customize them for every request you make.
```swift
// Customize the temperature to increase creativity.
let options = GenerationOptions(temperature: 2.0)

let session = LanguageModelSession()

let prompt = "Write me a story about coffee."
let response = try await session.respond(
    to: prompt,
    options: options
)
```
When you test apps that use the framework, use Xcode Instruments to understand more about the requests you make, like the time it takes to perform a request. When you make a request, you can access the Transcript entries that describe the actions the model takes during your LanguageModelSession.
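For example, assuming the session exposes its transcript as a property, you can print it after a request to review what the model did:

```swift
let session = LanguageModelSession()
_ = try await session.respond(to: "Write me a haiku about autumn.")

// The transcript documents the prompts, responses, and other actions
// the model took during this session.
print(session.transcript)
```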
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
class SystemLanguageModel
An on-device large language model capable of text generation tasks.
Beta
struct UseCase
A type that represents the use case for prompting.
- Foundation Models
- SystemLanguageModel Beta
Class
An on-device large language model capable of text generation tasks.
final class SystemLanguageModel
Generating content and performing tasks with Foundation Models
The SystemLanguageModel refers to the on-device text foundation model that powers Apple Intelligence. Use default to access the base version of the model and perform general-purpose text generation tasks. To access a specialized version of the model, initialize the model with SystemLanguageModel.UseCase to perform tasks like contentTagging.
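For example, a minimal sketch using the contentTagging use case named above:

```swift
// The base model for general-purpose text generation.
let baseModel = SystemLanguageModel.default

// A specialized model for a specific use case.
let taggingModel = SystemLanguageModel(useCase: .contentTagging)
```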
Verify the model availability before you use the model. Model availability depends on device factors like:

- The device must support Apple Intelligence
- Apple Intelligence must be turned on in System Settings
- The device must have sufficient battery
- The device cannot be in Game Mode
Use SystemLanguageModel.Availability to change what your app shows to people based on the availability condition:
```swift
import SwiftUI
import FoundationModels

struct GenerativeView: View {
    // Create a reference to the system language model.
    private var model = SystemLanguageModel.default

    var body: some View {
        // Placeholder views added so the switch compiles; replace with your UI.
        switch model.availability {
        case .available:
            // Show your intelligence UI.
            Text("Model is available.")
        case .unavailable(.deviceNotEligible):
            // Show an alternative UI.
            Text("This device doesn't support Apple Intelligence.")
        case .unavailable(.appleIntelligenceNotEnabled):
            // Ask the person to turn on Apple Intelligence.
            Text("Turn on Apple Intelligence in System Settings.")
        case .unavailable(.modelNotReady):
            // The model isn't ready because it's downloading or because of other system reasons.
            Text("The model isn't ready yet. Try again later.")
        case .unavailable(let other):
            // The model is unavailable for an unknown reason.
            Text("The model is unavailable: \(String(describing: other))")
        }
    }
}
```
convenience init(useCase: SystemLanguageModel.UseCase)
Creates a system language model for a specific use case.
struct UseCase
A type that represents the use case for prompting.
convenience init(adapter: SystemLanguageModel.Adapter)
Creates the base version of the model with an adapter.
struct Adapter
Specializes the system language model for custom use cases.
var isAvailable: Bool
A convenience getter to check if the system is entirely ready.
var availability: SystemLanguageModel.Availability
The availability of the language model.
enum Availability
The availability status for a specific system language model.
static let `default`: SystemLanguageModel
The base version of the model.
Languages supported by the model.
Copyable
Observable
Sendable
SendableMetatype
Enhance the experience in your app by prompting an on-device large language model.
Improving safety from generative model output
Create generative experiences that appropriately handle sensitive inputs and respect people.
Adding intelligent app features with generative models
Build robust apps with guided generation and tool calling by adopting the Foundation Models framework.
Beta
Beta Software
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
Learn more about using Apple's beta software
- Foundation Models
- SystemLanguageModel
- SystemLanguageModel.Availability Beta
Enumeration
The availability status for a specific system language model.
@frozen enum Availability
case available
The system is ready for making requests.
case unavailable(SystemLanguageModel.Availability.UnavailableReason)
Indicates that the system is not ready for requests.
enum UnavailableReason
The unavailable reason.
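Because the enumeration is frozen with these two cases, an exhaustive switch needs no default clause; a minimal sketch:

```swift
switch SystemLanguageModel.default.availability {
case .available:
    print("Ready for requests.")
case .unavailable(let reason):
    print("Unavailable: \(reason)")
}
```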
Returns a Boolean value indicating whether two values are equal.
Equatable
Sendable
SendableMetatype
var isAvailable: Bool
A convenience getter to check if the system is entirely ready.
Beta
var availability: SystemLanguageModel.Availability
The availability of the language model.
- Foundation Models
- SystemLanguageModel
- SystemLanguageModel.Adapter Beta
Structure
Specializes the system language model for custom use cases.
struct Adapter
Use the base system model for most prompt engineering, guided generation, and tools. If you need to specialize the model, train a custom Adapter to alter the system model weights and optimize it for your custom task. Use custom adapters only if you’re comfortable training foundation models in Python.

For more on custom adapters, see the adapter training toolkit.
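A sketch of loading a custom adapter from disk and starting a session with it; the file path is hypothetical, and it assumes the session initializer that accepts a model:

```swift
import Foundation
import FoundationModels

// Load a trained adapter from a file URL (path is illustrative).
let adapterURL = URL(fileURLWithPath: "/path/to/myAdapter.fmadapter")
let adapter = try SystemLanguageModel.Adapter(fileURL: adapterURL)

// Create a model specialized by the adapter, then start a session with it.
let customModel = SystemLanguageModel(adapter: adapter)
let session = LanguageModelSession(model: customModel)
```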
init(fileURL: URL) throws
Creates an adapter from the file URL.
init(name: String) throws
Creates an adapter downloaded from the background assets framework.
let creatorDefinedMetadata: [String : Any]
Values read from the creator defined field of the adapter’s metadata.
static func removeObsoleteAdapters()
Remove all obsolete adapters that are no longer compatible with current system models.
Get all adapter identifiers that are compatible with the current system models.
Returns true when an asset pack is a Foundation Models adapter that's compatible with the current system base model.
enum AssetError
convenience init(adapter: SystemLanguageModel.Adapter)
Creates the base version of the model with an adapter.
Beta
- Foundation Models
- SystemLanguageModel
- isAvailable Beta
Instance Property
A convenience getter to check if the system is entirely ready.
final var isAvailable: Bool { get }
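A minimal usage sketch:

```swift
let model = SystemLanguageModel.default

if model.isAvailable {
    // Show your generative UI.
    print("Ready for requests.")
} else {
    // Offer a fallback experience.
    print("Model unavailable.")
}
```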
var availability: SystemLanguageModel.Availability
The availability of the language model.
Beta
enum Availability
The availability status for a specific system language model.
- Foundation Models
- SystemLanguageModel
- availability Beta
Instance Property
The availability of the language model.
final var availability: SystemLanguageModel.Availability { get }
var isAvailable: Bool
A convenience getter to check if the system is entirely ready.
Beta
enum Availability
The availability status for a specific system language model.
- Foundation Models
- SystemLanguageModel
- supportedLanguages Beta
Instance Property
Languages supported by the model.
- Foundation Models
- Instructions Beta
Structure
Instructions define the model’s intended behavior on prompts.
struct Instructions
Generating content and performing tasks with Foundation Models
Improving safety from generative model output
You typically provide instructions to define the model's role and behavior. In the code below, the instructions specify that the model replies with topics rather than, for example, a recipe:
```swift
let instructions = """
Suggest related topics. Keep them concise (three to seven words) and make sure they
build naturally from the person's topic.
"""

let session = LanguageModelSession(instructions: instructions)

let prompt = "Making homemade bread"
let response = try await session.respond(to: prompt)
```
Apple trains the model to obey instructions over any commands it receives in prompts, so don’t include untrusted content in instructions. For more on how instructions impact generation quality and safety, see Improving safety from generative model output.
init(_:)
struct InstructionsBuilder
A type that represents an instructions builder.
protocol InstructionsRepresentable
Conforming types represent instructions.
Copyable
InstructionsRepresentable
class LanguageModelSession
An object that represents a session that interacts with a large language model.
Beta
struct Prompt
A prompt from a person to the model.
struct Transcript
A transcript that documents interactions with a language model.
struct GenerationOptions
Options that control how the model generates its response to a prompt.
- Foundation Models
- InstructionsBuilder Beta
Structure
A type that represents an instructions builder.
@resultBuilder struct InstructionsBuilder
static buildArray(_:)
Creates a builder with an array of prompts.
static buildBlock(_:)
Creates a builder with a block.
static buildEither(first:)
Creates a builder with the first component.
static buildEither(second:)
Creates a builder with the second component.
static buildExpression(_:)
Creates a builder with a prompt expression.
static buildLimitedAvailability(_:)
Creates a builder with a limited availability prompt.
static buildOptional(_:)
Creates a builder with an optional component.
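You rarely call these methods directly; builder syntax invokes them for you. Below is a minimal sketch, assuming `Instructions` offers a builder-based `init(_:)`; the `includeSafetyNote` flag is a hypothetical app setting.

```swift
// A minimal sketch of builder syntax. String literals flow through
// buildExpression(_:) and buildBlock(_:); the `if` without an `else`
// is handled by buildOptional(_:).
let includeSafetyNote = true // hypothetical app setting

let instructions = Instructions {
    "Suggest related topics. Keep them concise (three to seven words)."

    if includeSafetyNote {
        "Politely decline to suggest topics that are unsafe for children."
    }
}

let session = LanguageModelSession(instructions: instructions)
```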
init(_:)
Beta
protocol InstructionsRepresentable
Conforming types represent instructions.
Beta
Beta Software
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
Learn more about using Apple's beta software
- Foundation Models
- InstructionsRepresentable Beta
Protocol
Conforming types represent instructions.
protocol InstructionsRepresentable
var instructionsRepresentation: Instructions
An instance that represents the instructions.
Required Default implementation provided.
ConvertibleToGeneratedContent
Generable
GeneratedContent
Instructions
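Beyond the framework types above, your own types can adopt the protocol by supplying the single required property. A minimal sketch, where `BrandVoice` is a hypothetical type:

```swift
// A minimal sketch with a hypothetical BrandVoice type; the only
// requirement is the instructionsRepresentation property.
struct BrandVoice: InstructionsRepresentable {
    var tone: String

    var instructionsRepresentation: Instructions {
        Instructions("Respond in a \(tone) tone and keep answers brief.")
    }
}

// The computed Instructions value can then configure a session.
let voice = BrandVoice(tone: "friendly")
let session = LanguageModelSession(instructions: voice.instructionsRepresentation)
```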
init(_:)
Beta
struct InstructionsBuilder
A type that represents an instructions builder.
Beta
Beta Software
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
Learn more about using Apple's beta software
https://developer.apple.com/documentation/foundationmodels/instructions/instructionsrepresentable-implementations
- Foundation Models
- Instructions
- InstructionsRepresentable Implementations
API Collection
var instructionsRepresentation: Instructions
An instance that represents the instructions.
Beta