@tkersey
Last active November 16, 2024 16:16
For future reference but maybe not.

Quick Access

2024

November

  • Increasingly, the cyberinfrastructure of mathematics and mathematics education is built using GitHub to organize projects, courses, and their communities. The goal of this book is to help readers learn the basic features of GitHub available using only a web browser, and how to use these features to participate in GitHub-hosted mathematical projects with colleagues and/or students.

  • The first post of this series explored how to take advantage of Dev Containers, a useful VS Code feature that enables you to run and debug Swift command line or server apps in a Linux container.

    In this post you will take it a step further by having a more complex scenario: instead of storing the todos temporarily in memory, the app will store them in a PostgreSQL database. And to test it locally, you won’t need to install and run a Postgres server directly in your machine.

  • Say it with me now: If you’re trying to do more than one thing at once, something is waiting. And it just might be the most important thing.

  • The UIKit combination of UICollectionView and the UICollectionViewFlowLayout gives a lot of flexibility and control to build grid-like flow layouts. How do we do that with SwiftUI?

  • Everything You Need to Know About Live Activities and Dynamic Island in iOS

    With the release of iOS 16, Apple introduced Live Activities, and later with iPhone 14 Pro, the Dynamic Island—two powerful tools that allow us to present real-time, glanceable updates directly on the Lock Screen and at the top of the screen on the Dynamic Island. These features are designed to keep users informed about ongoing activities, like delivery tracking, live sports scores, or wait times, without requiring them to unlock their devices or open the app.

    In this two-part guide, we’ll discuss everything you need to know to integrate Live Activities and Dynamic Island effectively in your iOS app. We'll detail each step from understanding design constraints to setting up a Live Activity, handling updates, and adding interactions.

  • Tools, docs, and sample code to develop applications on the AWS cloud

  • While everyone who writes Swift code will use Swift Macros, not everyone should write their own Swift Macros. This book will help you determine whether writing Swift Macros is for you and show you the best ways to make your own.

    You'll create both freestanding and attached macros and get a feel for when you should and shouldn't create them, which sort of macro you should create, and how to use SwiftSyntax to implement them. Your macros will accept parameters when appropriate and will always include tests. You'll also learn to create helpful diagnostics for your macros, and even FixIts.

  • It’s like an invisible world that always surrounds us, and allows us to do many amazing things: It’s how radio and TV are transmitted, it’s how we communicate using Wi-Fi or our phones. And there are many more things to discover there, from all over the world.

    In this post, I’ll show you fifty things you can find there — all you need is this simple USB dongle and an antenna kit!

  • Backports the Swift 6 type Mutex to Swift 5 and all Darwin platforms via OSAllocatedUnfairLock.

  • A trace trap or invalid CPU instruction interrupted the process, often because the process violated a requirement or timeout.

    A trace trap gives an attached debugger the chance to interrupt the process at a specific point in its execution. On ARM processors, this appears as EXC_BREAKPOINT (SIGTRAP). On x86_64 processors, this appears as EXC_BAD_INSTRUCTION (SIGILL).

    The Swift runtime uses trace traps for specific types of unrecoverable errors — see Addressing crashes from Swift runtime errors for information on those errors. Some lower-level libraries, such as Dispatch, trap the process with this exception upon encountering an unrecoverable error, and log additional information about the error in the Additional Diagnostic Information section of the crash report. See Diagnostic messages for information about those messages.

    If you want to use the same technique in your own code for unrecoverable errors, call the fatalError(_:file:line:) function in Swift, or the __builtin_trap() function in C. These functions allow the system to generate a crash report with thread backtraces that show how you reached the unrecoverable error.

    An illegal CPU instruction means the program’s executable contains an instruction that the processor doesn’t implement or can’t execute.
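    As a sketch of this technique, the function below uses fatalError for a condition the program can't recover from; the function name and configuration key are made up for illustration:

    ```swift
    // Hypothetical example: trap on an unrecoverable configuration error.
    // fatalError generates a crash report with thread backtraces that
    // show how the process reached this point.
    func loadConfiguration(from values: [String: String]) -> String {
        guard let name = values["appName"] else {
            fatalError("Missing required configuration key 'appName'")
        }
        return name
    }

    print(loadConfiguration(from: ["appName": "MyApp"]))  // MyApp
    ```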

  • Sudden app crashes are a source of bad user experience and app review rejections. Learn how crash logs can be analyzed, what information they contain and how to diagnose the causes of crashes, including hard-to-reproduce memory corruptions and multithreading issues.

  • This is a really promising development. 32GB is just small enough that I can run the model on my Mac without having to quit every other application I’m running, and both the speed and the quality of the results feel genuinely competitive with the current best of the hosted models.

    Given that code assistance is probably around 80% of my LLM usage at the moment, this is a meaningfully useful release for how I engage with this class of technology.

  • for platform in \
      "$(PLATFORM_IOS)" \
      "$(PLATFORM_MACOS)" \
      "$(PLATFORM_MAC_CATALYST)" \
      "$(PLATFORM_TVOS)" \
      "$(PLATFORM_WATCHOS)"; \
    do \
      xcrun xcodebuild build \
        -workspace Dependencies.xcworkspace \
        -scheme Dependencies \
        -configuration $(CONFIG) \
        -destination platform="$$platform" || exit 1; \
    done;
  • Orka provides on-demand macOS environments to power everything from simple Xcode builds to fully integrated, complex automated CI/CD pipelines.

  • A value that has a custom representation in AnyHashable.

  • As you can see, it is quite simple to inadvertently extend the lifetime of objects with long-running async functions.

  • In 2021 we got a new Foundation type that represents a string with attributes: AttributedString. Attributes on ranges of the string can represent visual styles, accessibility features, link data and more. In contrast with the old NSAttributedString, new AttributedString provides type-safe API, which means you can't assign a wrong type to an attribute by mistake.

    AttributedString can be used in a variety of contexts and its attributes are defined in separate collections nested under AttributeScopes. System frameworks such as Foundation, UIKit, AppKit and SwiftUI define their own scopes.
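    As a minimal sketch of that type-safe API (the text and URL below are placeholder values):

    ```swift
    import Foundation

    // Setting a Foundation-scope attribute on a range of the string.
    // The compiler only accepts a URL here, not an arbitrary type.
    var greeting = AttributedString("Hello, world")
    if let range = greeting.range(of: "world") {
        greeting[range].link = URL(string: "https://example.com")
    }

    // The attribute splits the string into two runs:
    // "Hello, " without the link and "world" with it.
    print(greeting.runs.count)  // 2
    ```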

  • Server-side Swift has been available since the end of 2015. The idea behind its development was that you could use the same language for RESTful APIs, desktop, and mobile applications. As the Swift language evolved, the various Swift web frameworks became more robust and complex.

    That’s why I was happy to read Tib’s excellent article about a new HTTP server library written in Swift, Hummingbird. I immediately liked the concept of modularity, so decided to create a tutorial to show its simplicity.

  • Head’s up: this post is a technical deep dive into the code of DocC, the Swift language documentation system. Not that my content doesn’t tend to be heavily technical, but this goes even further than usual.

  • This paper is dedicated to the hope that someone with power to act will one day see that contemporary research on education is like the following experiment by a nineteenth century engineer who worked to demonstrate that engines were better than horses. This he did by hitching a 1/8 HP motor in parallel with his team of four strong stallions. After a year of statistical research he announced a significant difference. However, it was generally thought that there was a Hawthorne effect on the horses.

  • Select the best method of scheduling background runtime for your app. If your app needs computing resources to complete tasks when it’s not running in the foreground, you can select from many strategies to obtain background runtime. Selecting the right strategies for your app depends on how it functions in the background.

    Some apps perform work for a short time while in the foreground and must continue uninterrupted if they go to the background. Other apps defer that work to perform in the background at a later time or even at night while the device charges. Some apps need background processing time at varied and unpredictable times, such as when an external event or message arrives.

    Apps involved in health research studies can obtain background runtime to process data essential for the study. Apps can also request to launch in the background for studies in which the user participates.

    Select one or more methods for your app based on how you schedule activity in the background.

  • Refreshing and Maintaining Your App Using Background Tasks

  • Improve Rosetta performance by adding support for the total store ordering (TSO) memory model to your Linux kernel.

    Rosetta is a translation process that makes it possible to run apps that contain x86_64 instructions on Apple silicon. In macOS, Rosetta allows apps built for Intel-based Mac computers to run seamlessly on Apple silicon, and enables the same capability for Intel Linux apps in ARM Linux VMs.

    Rosetta enables Linux distributions running on Apple silicon to support legacy Intel binaries with the addition of a few lines of code in your virtualization-enabled app, and the creation of a directory share for Rosetta to use.

  • To make it possible to refer to the above two ImageLoader implementations using dot syntax, all that we have to do is to define a type-constrained extension for each one — which in turn contains a static API for creating an instance of that type.
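    The pattern can be sketched as follows; ImageLoader and its two implementations here are hypothetical stand-ins for the article's types:

    ```swift
    protocol ImageLoader {
        func loadImage(named name: String) -> String
    }

    struct RemoteImageLoader: ImageLoader {
        func loadImage(named name: String) -> String { "remote:\(name)" }
    }

    struct CachedImageLoader: ImageLoader {
        func loadImage(named name: String) -> String { "cached:\(name)" }
    }

    // Type-constrained extensions with static factories enable
    // dot syntax at the call site.
    extension ImageLoader where Self == RemoteImageLoader {
        static var remote: RemoteImageLoader { RemoteImageLoader() }
    }

    extension ImageLoader where Self == CachedImageLoader {
        static var cached: CachedImageLoader { CachedImageLoader() }
    }

    func render(with loader: some ImageLoader, name: String) -> String {
        loader.loadImage(named: name)
    }

    print(render(with: .remote, name: "avatar"))  // remote:avatar
    ```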

  • Learn about the fundamental concepts Swift uses to enable data-race-free concurrent code.

    Traditionally, mutable state had to be manually protected via careful runtime synchronization. Using tools such as locks and queues, the prevention of data races was entirely up to the programmer. This is notoriously difficult not just to do correctly, but also to keep correct over time. Even determining the need for synchronization may be challenging. Worst of all, unsafe code does not guarantee failure at runtime. This code can often seem to work, possibly because highly unusual conditions are required to exhibit the incorrect and unpredictable behavior characteristic of a data race.

    More formally, a data race occurs when one thread accesses memory while the same memory is being mutated by another thread. The Swift 6 language mode eliminates these problems by preventing data races at compile time.

  • If you’ve been in the Swift ecosystem for many years, then you’ve encountered this error at least once: “Protocol ‘XX’ can only be used as a generic constraint because it has Self or associated type requirements”. Maybe you even had nightmares about it 👻!

    It’s indeed one of the most common issues developers face while learning the language. And until not so long ago, it was impossible to “fix”: you had to rethink your code to avoid casting an object to an existential.

    Thankfully that time is now over! Let’s acclaim our saviour: any. We’ll dive into a real use case (comparing two Equatable objects) to understand how it can be used to solve our issues.
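    A common shape for that use case looks like this; the isEqual helper is one possible implementation, not the article's exact code:

    ```swift
    // Compare two values typed as `any Equatable` by attempting a
    // cast to a common concrete type.
    extension Equatable {
        func isEqual(to other: any Equatable) -> Bool {
            guard let other = other as? Self else { return false }
            return self == other
        }
    }

    let a: any Equatable = 42
    let b: any Equatable = 42
    let c: any Equatable = "42"

    print(a.isEqual(to: b))  // true
    print(a.isEqual(to: c))  // false
    ```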

  • SwiftUI lets us style portions of text by interpolating Text inside another Text and applying available text modifiers, such as foregroundColor() or font().

    Starting from iOS 17 we can apply more intricate styling to ranges within a Text view with foregroundStyle().

  • A Metal shader library.

October

  • Modify the payload of a remote notification before it’s displayed on the user’s iOS device.

    You may want to modify the content of a remote notification on a user’s iOS device if you need to:

    • Decrypt data sent in an encrypted format.
    • Download images or other media attachments whose size would exceed the maximum payload size.
    • Update the notification’s content, perhaps by incorporating data from the user’s device.

    Modifying a remote notification requires a notification service app extension, which you include inside your iOS app bundle. The app extension receives the contents of your remote notifications before the system displays them to the user, giving you time to update the notification payload. You control which notifications your extension handles.

  • A case study of gradually modernizing an established mobile application

    Incremental replacement of a legacy mobile application is a challenging concept to articulate and execute. However, we believe that by investing in the pre-requisites of legacy modernization, it is possible to yield benefits in the long term. This article explores the Strangler Fig pattern and how it can be applied to mobile applications. We chart the journey of an enterprise that refused to accept the high cost and risk associated with a full rewrite of their mobile application. By incrementally developing their new mobile app alongside a modular architecture, they were able to achieve significant uplifts in their delivery metrics.

  • Understand the structure and properties of the objects the system includes in the JSON of a crash report

    Starting with iOS 15 and macOS 12, apps that generate crash reports store the data as JSON in files with an .ips extension. Tools for viewing these files, such as Console, translate the JSON to make it easier to read and interpret. The translated content uses field names the article Examining the fields in a crash report describes. Use the following information to understand the structure of the JSON the system uses for these crash reports and how the data maps to the field names found in the translated content.

    Typical JSON parsers expect a single JSON object in the body of the file. The IPS file for a crash report contains two JSON objects: an object containing IPS metadata about the report incident and an object containing the crash report data. When parsing the file, extract the JSON for the metadata object from the first line. If the bug_type property of the metadata object is 309, the log type for crash reports, you can extract the JSON for the crash report data from the remainder of the text.
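    The two-object structure can be parsed roughly like this; the sample payload is made up, but bug_type 309 is the crash-report log type named above:

    ```swift
    import Foundation

    // First line: IPS metadata. Remainder: the crash report itself.
    let ips = """
    {"bug_type":"309","app_name":"MyApp"}
    {"pid":1234,"procName":"MyApp"}
    """

    let parts = ips.split(separator: "\n", maxSplits: 1,
                          omittingEmptySubsequences: false)
    let metadata = try! JSONSerialization
        .jsonObject(with: Data(parts[0].utf8)) as! [String: Any]

    var procName: String?
    if metadata["bug_type"] as? String == "309" {
        let report = try! JSONSerialization
            .jsonObject(with: Data(parts[1].utf8)) as! [String: Any]
        procName = report["procName"] as? String
    }
    print(procName ?? "not a crash report")  // MyApp
    ```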

  • Identify the signs of a Swift runtime error, and address the crashes runtime errors cause.

    Swift uses memory safety techniques to catch programming mistakes early. Optionals require you to think about how best to handle a nil value. Type safety prevents casting an object to a type that doesn’t match the object’s actual type.

    If you use the ! operator to force unwrap an optional value that’s nil, or if you force a type downcast that fails with the as! operator, the Swift runtime catches these errors and intentionally crashes the app. If you can reproduce the runtime error, Xcode logs information about the issue to the console.

  • Connect your app and a website to provide both a native app and a browser experience.

    Associated domains establish a secure association between domains and your app so you can share credentials or provide features in your app from your website. For example, an online retailer may offer an app to accompany their website and enhance the user experience.

    Shared web credentials, universal links, Handoff, and App Clips all use associated domains. Associated domains provide the underpinning to universal links, a feature that allows an app to present content in place of all or part of its website. Users who don’t download the app get the same information in a web browser instead of the native app.

    To associate a website with your app, you need to have the associated domain file on your website and the appropriate entitlement in your app. The apps in the apple-app-site-association file on your website must have a matching Associated Domains Entitlement.
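    As a sketch, the modern form of the apple-app-site-association file for universal links looks like this (the team ID, bundle ID, and path are placeholders):

    ```json
    {
      "applinks": {
        "details": [
          {
            "appIDs": ["ABCDE12345.com.example.app"],
            "components": [
              { "/": "/help/*" }
            ]
          }
        ]
      }
    }
    ```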

  • Convert XCTests to Swift Testing

    Testpiler is an app that allows you to easily convert unit tests written in Swift from XCTest to the new Swift Testing framework. Simply add a folder containing your unit tests, or add individual test files. You can preview a diff of the proposed changes so you know exactly what will happen. When you're ready, you can convert each source file individually, or convert all selected files in a batch.

  • Read the whole formal grammar.

  • You can decrease noise for your team by limiting notifications when your team is requested to review a pull request.

  • A new frontier for AI privacy in the cloud.

    Private Cloud Compute (PCC) delivers groundbreaking privacy and security protections to support computationally intensive requests for Apple Intelligence by bringing our industry-leading device security model into the cloud. Whenever possible, Apple Intelligence processes tasks locally on device, but more sophisticated tasks require additional processing power to execute more complex foundation models in the cloud. Private Cloud Compute makes it possible for users to take advantage of such models without sacrificing the security and privacy that they expect from Apple devices.

    We designed Private Cloud Compute with core requirements that go beyond traditional models of cloud AI security:

    • Stateless computation on personal user data: PCC must use the personal user data that it receives exclusively for the purpose of fulfilling the user’s request. User data must not be accessible after the response is returned to the user.
    • Enforceable guarantees: It must be possible to constrain and analyze all the components that critically contribute to the guarantees of the overall PCC system.
    • No privileged runtime access: PCC must not contain privileged interfaces that might enable Apple site reliability staff to bypass PCC privacy guarantees.
    • Non-targetability: An attacker should not be able to attempt to compromise personal data that belongs to specific, targeted PCC users without attempting a broad compromise of the entire PCC system.
    • Verifiable transparency: Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for PCC match our public promises.

    This guide is designed to walk you through these requirements and provide the resources you need to verify them for yourself, including a comprehensive look at the technical design of PCC and the specific implementation details needed to validate it.

  • Discover SwiftUI like never before with Companion for SwiftUI—an interactive documentation hub covering all SwiftUI elements, whether you’re developing for iOS, macOS, tvOS, or watchOS. The latest update features the complete set of additions from WWDC 2024, bringing its repository to over 3300 entries. Here are some of its key features:

    • ✔️ Comprehensive Coverage: Get in-depth insights into SwiftUI views, shapes, protocols, scenes, styles, property wrappers, and environment values across all Apple platforms (iOS, macOS, tvOS, and watchOS).
    • ✔️ Interactive Examples: Dive into interactive examples that run within the app. Adjust associated controls to witness immediate changes in views and code, facilitating a better understanding of SwiftUI’s power.
    • ✔️ Seamless Integration: Copy and paste examples directly into Xcode for quick and easy execution. Examples are ready to run, making your development process smoother.
    • ✔️ Filtering Options: Tailor your learning experience by creating filters to focus on relevant API areas, whether you’re working on a legacy project, exploring the latest WWDC ’23 additions, or researching SwiftUI’s implementation of a specific framework. Switch between multiple tabs, each with its own filter.
    • ✔️ Visual Learning: Need to grasp the finer details of a quad curve? No worries! Explore the .addQuadCurve() entry, and drag curve points to instantly visualize how function parameters change. Accelerate your learning curve with instant, hands-on knowledge.
    • ✔️ Menu Bar Icon: Quickly find topics using the system’s menu bar icon, allowing you to jump directly to the page you’re looking for.
  • A guide on everything related to Cursor for Apple Platforms development

  • Recently, there’s been much talk and fuss about AI, and whether or not it can improve your development workflow. I wanted to touch base about how AI and its implementation in Cursor have been significantly improving my speed and efficiency.

  • Build iOS/Swift apps using Visual Studio Code

  • Learn 4 ways to refresh views in SwiftUI.

    1. @State
    2. @Observable
    3. Using .refreshable on a List
    4. Using the id Modifier
  • Today, I’ll demonstrate how to migrate your Combine code over to AsyncAlgorithms, with a fully open-source tutorial project you can code along with.

  • Uses of the functional programming language include formal mathematics, software and hardware verification, AI for math and code synthesis, and math and computer science education.

  • For those who don’t follow Swift’s development, ABI stability has been one of its most ambitious projects and possibly its defining feature, and it finally shipped in Swift 5. The result is something I find endlessly fascinating, because I think Swift has pushed the notion of ABI stability farther than any language without much compromise.

    So I decided to write up a bunch of the interesting high-level details of Swift’s ABI. This is not a complete reference for Swift’s ABI, but rather an abstract look at its implementation strategy. If you really want to know exactly how it allocates registers or mangles names, look somewhere else.

  • This is an extension that allows a global actor to provide a run method similar to MainActor.run. I took the signature from the MainActor definition itself.

    // Assumes CustomActor is declared as a global actor, for example:
    @globalActor actor CustomActor {
        static let shared = CustomActor()
    }

    extension CustomActor {
        public static func run<T>(resultType: T.Type = T.self, body: @CustomActor @Sendable () throws -> T) async rethrows -> T where T: Sendable {
            try await body()
        }
    }
  • Turn Haskell expressions into pointfree style in your browser with WASM

  • Manage project commands using Swift

    Inspired by Make, built for convenience

  • Metal provides the lowest-overhead access to the GPU, enabling you to maximize the graphics and compute potential of your app on iOS, macOS, and tvOS. Every millisecond and every bit is integral to a Metal app and the user experience; it’s your responsibility to make sure your Metal app performs as efficiently as possible by following the best practices described in this guide. Unless otherwise stated, these best practices apply to all platforms that support Metal.

  • As one of the early adopters of Apple TV and tvOS, Gilt Groupe was recently selected to present their “Gilt on TV” app at the Apple Keynote event in September.

    This presentation covers Gilt's discoveries during the process of building a tvOS app from scratch in Swift.

    It was presented at iOSoho on October 12, 2015 in New York City.

  • Resolution by iOS device

  • I don’t think websites were ever intended to be made only by “web professionals.” Websites are documents at heart. Just about everyone knows how to make a document in this digital age, be it Word, Google Docs, Markdown, or something else. HTML shouldn’t be an exception. Sure it’s a bit more technical than other types of documents, but it’s also very special.

    It’s the document format of the web. The humble HTML document is ubiquitous. It’s everywhere. If you looked at a website today, you almost certainly saw HTML.

    HTML is robust. You could look at a website made today or one made twenty years ago. They both use HTML and they both work. That is an achievement that not many document formats can claim. You also don’t need any special program to make an HTML document. Many exist, and you could use any of them. You could also just open Notepad and write HTML by hand (spoiler: we are going to do just that).

    I created this web book because I wanted something for people who don’t consider themselves professional web developers. Imagine if Word documents were only ever created by “Word professionals.” No. Knowing how to write some HTML and put it on the web is a valuable skill that is useful to all sorts of professional and personal pursuits. It doesn’t belong only to those of us who make websites as a career. HTML is for everyone. HTML is for people.

  • Make your app more responsive by examining the event-handling and rendering loop.

    Human perception is adept at identifying motion and linking cause to effect through sequential actions. This is important for graphical user interfaces because they rely on making the user believe a certain interaction with a device causes a specific effect, and that the objects onscreen behave sufficiently realistically. For example, a button needs to highlight when a person taps or clicks it, and when someone drags an object across the screen, it needs to follow the mouse or finger.

    There are two ways this illusion can break down:

    • The time between user input and the screen update is too long, so the app’s UI doesn’t seem like it’s responding instantaneously anymore. A noticeable delay between user input and the corresponding screen update is called a hang. For more information, see Understanding hangs in your app.
    • The motion onscreen isn’t fluid like it would be in the real world. An example is when the screen seems to get stuck and then jumps ahead during scrolling or during an animation. This is called a hitch. For more information, see Understanding hitches in your app.

    This article covers different types of user interactions and how the event-handling and rendering loop processes events to handle them. This foundational knowledge helps you understand what causes hangs and hitches, how the two are similar, and what differentiates them.

  • Determine the cause for delays in user interactions by examining the main thread and the main run loop.

    A discrete user interaction occurs when a person performs a single well-contained interaction and the screen then updates. An example is when someone presses a key on the keyboard and the corresponding letter then appears onscreen. Although the software running on the device needs time to process the incoming user input event and compute the corresponding screen update, it’s usually so quick that a human can’t perceive it and the screen update seems instantaneous.

    When the delay in handling a discrete user interaction becomes noticeable, that period of unresponsiveness is known as a hang. Other common terms for this behavior are freeze because the app stops updating, and spin based on the spinning wait cursor that appears in macOS when an app is unresponsive.

    Although discrete interactions are less sensitive to delays than continuous interactions, it doesn’t take long for a person to perceive a gap between an action and its reaction as a pause, which breaks their immersive experience. A delay of less than 100 ms in a discrete user interaction is rarely noticeable, but even a few hundred milliseconds can make people feel that an app is unresponsive.

    A hang is almost always the result of long-running work on the main thread. This article explains what causes a hang, why the main thread and the main run loop are essential to understanding hangs, and how various tools can detect hangs on Apple devices.

  • Determine the cause of interruptions in motion by examining the render loop.

    Human perception is very sensitive to interruptions in motion. When a fluid motion onscreen gets stuck for a short time, even a couple of milliseconds can be noticeable. This type of interruption is known as a hitch. Hitches happen during continuous interactions, like scrolling or dragging, or during animations. Each hitch impacts the user experience, so you want as few hitches as possible in your app.

    An interruption in motion occurs when the display doesn’t update at the expected pace. The display doesn’t update in time when the next frame isn’t ready for display, so the frame is late.

    A delay due to a late frame often causes the system to skip one or more subsequent frames, which is why such behavior is also referred to as a frame drop. However, dropping a frame is just one potential response the system uses to recover from a late frame, and not every hitch causes a frame drop.

    When a frame is late, it’s usually due to a delay occurring somewhere in the render loop. These delays are the result of a delay in the main thread, most often in the commit phase, known as a commit hitch, or a delay in the render phase, known as a render hitch.

  • Reacting to property changes is fairly straightforward using the @Observable macro as well. You can simply use the willSet or didSet property observers to listen for changes.

  • CatColab is...

    Concepts

    While CatColab has a notebook-style interface that draws inspiration from computational notebooks like JupyterLab and structured document editors like Notion and Coda, its conceptual underpinnings are quite different from both of those classes of tools. Here is an overview of the concepts that you'll encounter in CatColab today:

    Logic

    CatColab is not a general-purpose programming or modeling language but is rather an extensible environment for working in domain-specific logics, such as those of database schemas or biochemical regulatory networks. To do anything in the tool besides write text, you'll need to choose a logic.

    Model

    Models in CatColab are models within a logic, such as a particular database schema or regulatory network. Models are specified declaratively and are well-defined mathematical objects. CatColab is a structure editor for models in a logic, allowing formal declarations to be intermixed with descriptive rich text.

    Analysis

    Unlike most computational notebooks, CatColab strictly separates the specification of a model from any outputs derived from it. Depending on the logic, an analysis of a model might include visualization, simulation, identification of motifs, and translation into other formats.

    Future versions of CatColab will introduce the further concepts of instances, morphisms, and migrations, to be described when they become available.

  • Add your published Swift package as a local package to your app’s project and develop the package and the app in tandem.

    Swift packages are a convenient and lightweight solution for creating a modular app architecture and reusing code across your apps or with other developers. Over time, you may want to develop your published Swift package in tandem with your app, or create a sample app to showcase its features. To develop a Swift package in tandem with an app, you can leverage the behavior whereby a local package overrides a package dependency with the same name:

    1. Add the Swift package to your app as a local package instead of a package dependency, as described in Editing a package dependency as a local package.
    2. Develop your app and your Swift package in tandem, and push changes to their repositories.
    3. If you release a new version of your Swift package or want to stop using the local package, remove it from the project to use the package dependency again.
  • Indicates that the view should receive focus by default for a given namespace.

    This modifier sets the initial focus preference when no other view has focus. Use the environment value resetFocus to force a reevaluation of default focus at any time.

    The following tvOS example shows three buttons, labeled “1”, “2”, and “3”, in a VStack. By default, the “1” button would receive focus, because it is the first child in the stack. However, the prefersDefaultFocus(_:in:) modifier allows button “3” to receive default focus instead. Once the buttons are visible, the user can move down to and focus the “Reset to default focus” button. When the user activates this button, it uses the ResetFocusAction to reevaluate default focus in the mainNamespace, which returns the focus to button “3”.

    struct ContentView: View {
        @Namespace var mainNamespace
        @Environment(\.resetFocus) var resetFocus

        var body: some View {
            VStack {
                Button("1") {}
                Button("2") {}
                Button("3") {}
                    .prefersDefaultFocus(in: mainNamespace)
                Button("Reset to default focus") {
                    resetFocus(in: mainNamespace)
                }
            }
            .focusScope(mainNamespace)
        }
    }

    The default focus preference is limited to the focusable ancestor that matches the provided namespace. If multiple views express this preference, then SwiftUI applies the current platform rules to determine which view receives focus.

  • Prevents the view from updating its child view when its new value is the same as its old value.

  • Use Instruments to analyze the performance, resource usage, and behavior of your apps. Learn how to improve responsiveness, reduce memory usage, and analyze complex behavior over time.

  • Learn how to analyze hangs with Instruments.

    The following tutorials show you how to use Instruments to find a hang in an app, analyze what’s causing it to hang, and then try out various solutions to fix the problem. Through multiple iterations of recording and analyzing data — and iterating on changes to your code — you’ll apply fixes and ultimately end up with working code that doesn’t result in a hang. Many of the principles detailed here can also be found in the WWDC23 session, Analyze hangs with Instruments.

  • This has led me to create Global Actors with no custom functionality. This isn’t how most of us are thinking about actors, but it allows us to do some powerful things.

    1. Avoid dumping too much logic into an Actor. This removes the threat of Massive Actors and leaves us more options as the codebase evolves.
    2. Separate the logic in our code from how it is run. This is a powerful technique I’ve used for years to allow code I work with to scale.
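    A minimal sketch of a "global actor with no custom functionality" (hypothetical names), showing the logic kept separate from how it runs:

```swift
// A global actor with no custom behavior: it exists purely to provide
// a serial isolation domain that other code can opt into.
@globalActor
actor DataActor {
    static let shared = DataActor()
}

// The logic lives in a plain class; the annotation decides *how* it
// runs, keeping "what the code does" separate from "where it executes".
@DataActor
final class Counter {
    private var value = 0
    func increment() -> Int {
        value += 1
        return value
    }
}
```

    Because `Counter` is isolated to `DataActor`, every call site hops onto the same executor, and the class itself stays free of locking code.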
  • So assuming you have an Intel Mac, follow these instructions to use Boot Camp to install Windows 11.

  • One of OCaml’s flagship features is what they call “abstract types”. In essence, this lets programmers declare a type, even as a type synonym inside a module and, by hiding its definition in the signature, make it appear abstract to the outside world.

  • Spend time with stories that matter

    StoryTime is for balanced teams that want to work with user stories.

    Developed by a team of Pivotalumni, it aims to capture the best of tools we’ve used in the past, while improving on their weaknesses.

    There are many tools available for the problems we want to address. We think you’ll find StoryTime compelling if you agree with some rough guiding principles:

    • Teams (not individuals, or even pairs) are the fundamental unit of allocation.
    • There is no story “priority,” only position.
    • Acceptance is an important concept.
    • It’s better to work to select the right tools than to configure them to behave correctly.
    • User stories are flexible, powerful, and useful for thinking and communicating about software.

    StoryTime is very new. Because tools people count on are being retired, we’re pushing to preview as early as we can tolerate. We intend to invite people who sign up for updates to use the alpha version soon. We'll also soon share a public roadmap, in addition to writing more about our vision for StoryTime.

    We're a small, bootstrapping team. We’re not going to take VC money to seek market share or usage numbers, or an exit. We don’t want to exit. We want this software to be available to people it fits with, for a price that's well worth it. We’re working to make that happen.

  • Learn how to harness the power of Swift’s advanced type system, and make it a powerful ally and assistant.

  • It seems that at some point, even though UserDefaults is intended for non-sensitive information, it started getting marked as data that needs to be encrypted and cannot be accessed until the user unlocks their device. I don’t know if it’s because Apple found developers were storing sensitive data in there even when they shouldn’t be, but the result is that even if you just store something innocuous, like what color scheme the user has set for your app, that theme cannot be accessed until the device is unlocked.

  • This article goes in-depth on how to create Shell scripts to manage many parts of a Swift Package’s lifecycle. If you’re just after the scripts and basic info, have a look at the SwiftPackageScripts project.

  • In iOS 15 and later, the system may, depending on device conditions, prewarm your app — launch nonrunning application processes to reduce the amount of time the user waits before the app is usable. Prewarming executes an app’s launch sequence up until, but not including, when main() calls UIApplicationMain(_:_:_:_:). This provides the system with an opportunity to build and cache any low-level structures it requires in anticipation of a full launch.
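    Many developers branch on prewarming at runtime by checking the "ActivePrewarm" environment variable. Note this is commonly observed behavior rather than documented API, so the sketch below is an assumption:

```swift
import Foundation

// Heuristic: the "ActivePrewarm" environment variable is commonly
// reported to be set to "1" during a prewarmed launch. This is observed
// behavior, not documented API, so treat it as an assumption.
func isPrewarmedLaunch() -> Bool {
    ProcessInfo.processInfo.environment["ActivePrewarm"] == "1"
}
```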

  • Since reference types do not directly store their data, we only incur reference-counting costs when copying them. There is more to this than just incrementing and decrementing an integer: each operation requires several levels of indirection and must be performed atomically, since the heap can be shared between multiple threads at the same time.

  • App Icons are the first touchpoint with your user and they serve as business cards of your product. Adding depth to it can elevate your App’s personality in an impactful way! Make sure to experiment with sketches, blending modes and shadows to find the rendering process that best conveys your style and the level of realism you were looking for.

  • Reminder: Apple Watches use 32 bit pointers

  • Build retro games using WebAssembly for a fantasy console

  • Core Data is a powerful framework that allows you to manage the persistent model layer of your application and, while it is a first-party solution that has been a standard in the Apple ecosystem for many years, it is dated and is not straightforward to use.

    In fact, the community has been asking for many years for a more modern and easier-to-use alternative to Core Data, and those wishes were finally granted with the introduction of the SwiftData framework at WWDC23. While SwiftData is much simpler to set up and interact with than Core Data, it is a wrapper around Core Data and, as such, it inherits a lot of the baggage that developers dreaded when working with Core Data.

    One of the biggest challenges that seems to catch a lot of people off guard when working with Core Data, and by extension SwiftData, is managing models across different concurrency contexts. SwiftData and Core Data models are not Sendable or thread-safe, so they must not be shared across threads.
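    A framework-free sketch of the usual workaround (the names below are hypothetical stand-ins, not SwiftData API): instead of handing the model object itself to another concurrency domain, pass a Sendable identifier and re-fetch inside the isolated context. With SwiftData, the analogous Sendable handle is the model's persistent identifier.

```swift
import Foundation

// Hypothetical stand-ins for a model and its store; the point is the
// shape of the pattern, not the framework API.
struct TodoID: Hashable, Sendable { let raw: UUID }

actor TodoStore {
    private var todos: [TodoID: String] = [:]

    func insert(title: String) -> TodoID {
        let id = TodoID(raw: UUID())
        todos[id] = title
        return id
    }

    // Other concurrency domains hold only the Sendable TodoID and
    // re-fetch the data inside the actor's isolation.
    func title(for id: TodoID) -> String? { todos[id] }
}
```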

  • SwiftUI provides a powerful mechanism called the environment, allowing data to be shared across views in a view hierarchy. This is particularly useful when we need to pass data or configuration information from a parent view down to its children, even if they’re many levels deep. It enhances the flexibility of SwiftUI views and decouples the data source from its destination.

    SwiftUI includes a predefined list of values stored in the EnvironmentValues struct. These values are populated by the framework based on system state and characteristics, user settings or sensible defaults. We can access these predefined values to adjust the appearance and behavior of custom SwiftUI views, and we can also override some of them to influence the behavior of built-in views. Additionally, we can define our own custom values by extending the EnvironmentValues struct.

    In this post, we'll explore various ways to work with the SwiftUI environment, including reading and setting predefined values, creating custom environment keys, and using the environment to pass down actions and observable classes.

  • Create intuitive and easily manipulated user-interactive controls for your tvOS app.

    On Apple TV, people use a remote or game controller to navigate through interface elements like movie posters, apps, or buttons, highlighting each item as they come to it. The highlighted item is said to be focused or in focus. It appears elevated or otherwise distinct from other items. An item is considered focused when the user has highlighted it, but not selected it. The user moves focus by navigating through different UI items, which triggers a focus update.

  • Parallax is a subtle visual effect the system uses to convey depth and dynamism when an element is in focus. As an element comes into focus, the system elevates it to the foreground, gently swaying it while applying illumination that makes the element’s surface appear to shine. After a period of inactivity, out-of-focus content dims and the focused element expands.

    Layered images are required to support the parallax effect.

  • I'm so sorry someone put you up to this. But I'll provide some tips that can maybe help out. I'm going to cover what a pixel artist needs to know and try to avoid any technical explanations for why things are such a pain (cough NTSC) (and I guess Woz isn't entirely blameless here).

    Update! Was not expecting this to be so popular, sorry I don't have the screen captures ready. I don't have a working CRT anymore so captures are just with a capture card. Applewin does a really good job with the actual colors (it matches what you'd see on a CRT better than the capture device does). The "real world" capture will mostly show how the aspect ratio is a bit more squashed horizontally. This is most noticeable if you're drawing circles (and this is another case where it's adjustable on a real TV so there's not necessarily a "right" answer to what the value should be). Capture card output is important in one case: if I am capturing for a demoparty it's going to look like it does from the capture card so keep that in mind.

    Anyway, these are your pixelart options, roughly ranked in level of complications you'll encounter trying to make things work. (Honestly if you're making art for me it might just be easier to make ZX Spectrum or IBM CGA art as those convert relatively easily to Apple II).

    • Monochrome hi-res 280x192
    • Four color hi-res 140x192
    • Six color hi-res 140x192
    • Fifteen color lo-res (40x48)
    • Text mode / Mouse-text
    • Fifteen color double lo-res (80x48)
    • Fifteen color double hi-res (140x192)
    • Fifteen color cycle-counted mode (40x96)

September

There are so many people advocating for the use of URLProtocol for mocking HTTP requests in Swift that I couldn’t believe how quickly it fell apart for me. In fact, I found more writing about using URLProtocol as a mock than I did about using URLProtocol for its intended purpose. This post is about the shortcomings that I encountered, and how I solved them by mocking URLSession instead.
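One common shape of the URLSession-mocking approach (illustrative names, not necessarily the author's exact code): hide the one URLSession method you call behind a protocol, conform URLSession to it in production, and swap in a canned implementation for tests.

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// Hypothetical protocol capturing the single URLSession call the app
// uses. In the real app you would add:
//     extension URLSession: HTTPDataFetching {}
protocol HTTPDataFetching {
    func data(for request: URLRequest) async throws -> (Data, URLResponse)
}

// A mock that returns canned data without touching the network.
struct MockSession: HTTPDataFetching {
    var stubbedData: Data
    var stubbedResponse: URLResponse

    func data(for request: URLRequest) async throws -> (Data, URLResponse) {
        (stubbedData, stubbedResponse)
    }
}
```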

Use ActivityKit to receive push tokens and to remotely start, update, and end your Live Activity with ActivityKit notifications.

ActivityKit offers functionality to start, update, and end Live Activities from your app. Additionally, it offers functionality to receive push tokens. You can use push tokens to update and end Live Activities with ActivityKit push notifications that you send from your server to Apple Push Notification service (APNs). Starting with iOS 17.2 and iPadOS 17.2, you can also start Live Activities by sending ActivityKit push notifications to push tokens.

Starting with iOS 18 and iPadOS 18, you can use broadcast push notifications to update Live Activities for a large audience. You can subscribe to updates on a channel using ActivityKit, and update or end Live Activities for everyone subscribed by sending an ActivityKit push notification on a channel to APNs from your remote server.

There’s a lot to say about animations — we could discuss the theory, the different curves, and how to ensure continuity. We could also dive into the design of animations. However, today we’ll focus solely on how to implement animations in SwiftUI. There are several ways to do this, but we’ll cover four main approaches.

Earlier this year, I read Martin Uecker's proposal N3212 to add parametric polymorphism to C. It's easy to scoff at the idea of adding generic programming to plain C, when C++ already has templates, and nearly every modern systems language has some form of generic programming support, but it got me thinking there is an opportunity to do something meaningfully and usefully different from those other languages. C++ templates rely on monomorphization, meaning that when you write a generic function or type, the compiler generates a distinct specialization for every set of types you use it with. Most other systems-ish languages follow C++'s lead, because monomorphization allows each specialization to be individually emitted and optimized specifically for the set of types it's instantiated on, and the resulting specializations don't need any runtime support to handle different types. However, monomorphization also implies a much more complicated compilation and linking model, where the source code (or some intermediate representation thereof) of generic definitions has to be consistently available to the compiler in order to generate new instantiations as needed.

Welcome to the 88x31 archive on hellnet.work! This site contains 31,119 unique* 88x31 buttons that I scraped from the GeoCities archives compiled by the incredible ARCHIVE TEAM before GeoCities' demise in late 2009.

There is also a background page with stats and interesting links!

Update 07/24: A scan of previously unchecked geocities subsites revealed even more 88x31 buttons in the archive! Discovered 1,862 new 88x31 banners which raises the total to 31,119!

The support library and macros allowing Swift code to easily call into Java libraries.

SwiftPM Snippets are one of the most powerful features of the Swift Package Manager, and yet two years after their introduction few developers know they exist. This tutorial will explain some of the advantages of using SwiftPM Snippets and show you how to add Snippets to a Swift package.

In this tutorial, we will use the Apple DocC tool to preview and iterate on Snippets locally. The DocC tool itself does not support rendering clickable references within Snippets; however, the finished SwiftPM project containing Snippets can be published to a platform like Swiftinit, where the Snippets will be rendered with clickable references, allowing readers to interact with the symbols contained within them and navigate to supplemental documentation.

All information about how to easily debug tvOS

You'll likely see a lot of Unison code during the Unison Forall conference. Here are some basics to get you started.

Learn how to quickly pair your iPhone, iPad, iPod touch, or Mac to your Apple TV 4K or Apple TV HD.

Pair your iPhone, iPad, or iPod touch with your Apple TV

With just a tap, you can pair your iOS device with your Apple TV so you can use your iOS device as a remote or keyboard. You can also use AirPlay* and screen sharing without having to enter a four-digit pin each time. Here's how:

  1. On your Apple TV, go to Settings > Remotes and Devices > Remote App and Devices.
  2. Unlock your iOS device and bring it close to your Apple TV.
  3. When you see a message on your iOS device that says Pair Apple TV, tap Pair.
  4. On your iOS device, enter the four-digit pin that appears on your TV.
  5. When paired, your iOS device appears under Devices on your Apple TV.

I am part of a Lean formalization project in analytic number theory (using Lean 4). I would like your assistance on one step in the formalization, which is to deduce one version $\sum_{p \leq x} \log p = x + o(x)$ of the prime number theorem from another version $\sum_{n \leq x} \Lambda(n) = x + o(x)$. The code is provided below, with both of the forms of the PNT given with "sorry"s in their proof. What I would like to do is to fill in the "sorry" for chebyshev_asymptotic (leaving the sorry for WeakPNT unfilled). I understand that this will be dependent on the methods available in Mathlib, and on the precise version of Lean 4 used, which may not be in your training data. However, if you can perhaps provide a plausible breakdown of the possible proof of chebyshev_asymptotic into smaller steps, each of which can be filled at present by a further sorry, we can start from there, see if it compiles, and then work on individual sorries later.

Create, organize, and annotate symbol images using SF Symbols.

SF Symbols 4 offers a set of over 4,000 consistent, highly configurable symbol images that you can use in your app. You can apply stylistic traits typically associated with text, such as colors, text style, weight, and scale. Symbols contain additional traits that allow them to integrate seamlessly with surrounding text, and adapt to platform features like Dynamic Text and Dark Mode.

You can create your own custom symbol images with the same capabilities that SF Symbols provides. To create your custom symbol:

  1. Export an SVG file from the SF Symbols app.
  2. Edit the SVG file in a vector-drawing app.
  3. Export the file from your drawing app as an SVG file.
  4. Validate the SVG file using the SF Symbols app.
  5. Import the custom symbol into the SF Symbols app and organize it into a group.
  6. Add annotations, if necessary.
  7. Export a template file for distribution.

One way to begin creating your own symbol is by basing it on an existing symbol you find in the SF Symbols app. For example, the circle symbol can give you a great reference point to start working with.

The Swift toolchain for Android is the culmination of many years of community effort, in which we (the Skip team) have played only a very small part.

Even before Swift was made open source, people were tinkering with getting it running on Android, starting with Romain Goyet’s “Running Swift code on Android” attempts in 2015, which got some basic Swift compiling and running on an Android device. A more practical example came with Geordie J’s “How we put an app in the Android Play Store using Swift” in 2016, where Swift was used in an actual shipping Android app. Then in 2018, Readdle published “Swift for Android: Our Experience and Tools” on integrating Swift into their Spark app for Android. These articles provide valuable technical insight into the mechanics and complexities involved with cross-compiling Swift for a new platform.

In more recent years, the Swift community has had various collaborative and independent endeavors to develop a usable Swift-on-Android toolchain. Some of the most prominent contributors on GitHub are @finagolfin, @vgorloff, @andriydruk, @compnerd, and @hyp. Our work merely builds atop of their tireless efforts, and we expect to continue collaborating with them in the hopes that Android eventually becomes a fully-supported platform for the Swift language.

Looking towards the future, we are eager for the final release of Swift 6.0, which will enable us to publish a toolchain that supports all the great new concurrency features, as well as the Swift Foundation reimplementation of the Foundation C/Objective-C libraries, which will give us the ability to provide better integration between Foundation idioms (bundles, resources, user defaults, notifications, logging, etc.) and the standard Android patterns. A toolchain is only the first step in making native Swift a viable tool for building high-quality Android apps, but it is an essential component that we are very excited to be adding to the Skip ecosystem.

I recently ran into a funny bug with deep links.

Sometimes, when tapping a push notification, some users reported the destination screen appearing twice - the app would open, navigate to the correct screen, but the screen push transition would happen twice.

I began investigating, unaware how deep this rabbit hole would go.

Sets the preferred visibility of the non-transient system views overlaying the app.

Use this modifier to influence the appearance of system overlays in your app. The behavior varies by platform.

In iOS, the following example hides every persistent system overlay. In visionOS 2 and later, the SharePlay Indicator hides if the scene is shared through SharePlay, or not shared at all. During screen sharing, the indicator always remains visible. The Home indicator doesn’t appear without specific user intent when you set visibility to hidden. For a WindowGroup, the modifier affects the visibility of the window chrome. For an ImmersiveSpace, it affects the Home indicator.

Affected non-transient system views can include, but are not limited to:

  • The Home indicator.
  • The SharePlay indicator.
  • The Multitasking Controls button and Picture in Picture on iPad.

See an overview of potential source compatibility issues.

Swift 6 includes a number of evolution proposals that could potentially affect source compatibility. These are all opt-in for the Swift 5 language mode.

Encapsulate view-specific data within your app’s view hierarchy to make your views reusable.

Store data as state in the least common ancestor of the views that need the data to establish a single source of truth that’s shared across views. Provide the data as read-only through a Swift property, or create a two-way connection to the state with a binding. SwiftUI watches for changes in the data, and updates any affected views as needed.

Don’t use state properties for persistent storage because the life cycle of state variables mirrors the view life cycle. Instead, use them to manage transient state that only affects the user interface, like the highlight state of a button, filter settings, or the currently selected list item. You might also find this kind of storage convenient while you prototype, before you’re ready to make changes to your app’s data model.

A control for selecting from a set of mutually exclusive values by index.

So, how far are we away from actually working without builds in HTML, CSS and JavaScript? The idea of “buildless” development isn’t new - but there have been some recent improvements that might get us closer. Let’s jump in.

The obvious tradeoff for a buildless workflow is performance. We use bundlers mostly to concatenate files for fewer network requests, and to avoid long dependency chains that cause "loading waterfalls". I think it's still worth considering, but take everything here with a grain of performance salt.

Impactful Technical Leadership

As an engineering manager, you almost always have someone in your company to turn to for advice: a peer on another team, your manager, or even the head of engineering. But who do you turn to if you're the head of engineering? Engineering executives have a challenging learning curve, and many folks excitedly start their first executive role only to leave frustrated within the first 18 months.

In this book, author Will Larson shows you ways to obtain your first executive job and quickly ramp up to meet the challenges you may not have encountered in non-executive roles: measuring engineering for both engineers and the CEO, company-scoped headcount planning, communicating successfully across a growing organization, and figuring out what people actually mean when they keep asking for a "technology strategy."

The Cultural Atlas is an educational resource providing comprehensive information on the cultural background of Australia’s migrant populations. The aim is to improve social cohesion and promote inclusion in an increasingly culturally diverse society.

The oklch() functional notation expresses a given color in the Oklab color space. oklch() is the cylindrical form of oklab(), using the same L axis, but with polar Chroma (C) and Hue (h) coordinates.

Paste HEX/RGB/HSL to convert to OKLCH

Picking colors and creating balanced color palettes in Figma is not an easy task. HSL and HSB are not perceptually uniform: HSL's lightness is relative to the current hue, so for each hue the real perceived 50% lightness is not at L 50.

The same problem applies to hue: if we make a palette from hue 0 to 70 with the same incremental value, we'll get a palette that is not perceptually progressive; some hue changes will seem bigger than others.

We also have a problem known as the “Abney effect”, mainly in the blue hues. If we take hue 240, it shifts from blue to purple when we update the lightness.

OkColor solves all these problems and more: its parameters are reliable and uniform, so you know what you'll get.

If we change the hue of a color in OkLCH and keep the same lightness value, we know that the resulting color will have the same perceived lightness.

You can also easily create perceptually uniform color palettes, and do more advanced things with OkLCH like picking colors in the P3 space and using relative chroma (see this thread for more info).
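The math behind that uniformity is small enough to sketch. Below is the linear-sRGB-to-Oklab conversion from Björn Ottosson's published reference implementation (OKLCH is just the polar form of Oklab); any transcription slips here are mine, not the spec's.

```swift
import Foundation

// Linear sRGB -> Oklab, per Björn Ottosson's reference implementation.
// Lightness L is the perceptually uniform axis that OKLCH shares with Oklab.
func oklab(r: Double, g: Double, b: Double) -> (L: Double, a: Double, b: Double) {
    // Linear RGB to an LMS-like cone response.
    let l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    let m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    let s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b

    // Cube-root nonlinearity approximates perceptual response.
    let l_ = cbrt(l), m_ = cbrt(m), s_ = cbrt(s)

    return (
        L: 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
        a: 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
        b: 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_
    )
}

// Chroma C and hue h are the polar coordinates of the (a, b) plane.
func oklch(r: Double, g: Double, b: Double) -> (L: Double, C: Double, h: Double) {
    let lab = oklab(r: r, g: g, b: b)
    return (lab.L,
            (lab.a * lab.a + lab.b * lab.b).squareRoot(),
            atan2(lab.b, lab.a) * 180 / .pi)
}
```

Changing only h while holding L fixed keeps the perceived lightness constant, which is exactly the property HSL lacks.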

Use TypeScript as your preprocessor. Write type‑safe, locally scoped classes, variables and themes, then generate static CSS files at build time.

Learn the principles of the App Intents framework, like intents, entities, and queries, and how you can harness them to expose your app's most important functionality right where people need it most. Find out how to build deep integration between your app and the many system features built on top of App Intents, including Siri, controls and widgets, Apple Pencil, Shortcuts, the Action button, and more. Get tips on how to build your App Intents integrations efficiently to create the best experiences in every surface while still sharing code and core functionality.

Build, compile, and execute compute graphs utilizing all the different compute devices on the platform, including GPU, CPU, and Neural Engine.

Metal Performance Shaders Graph provides high-performance, energy-efficient computation on Apple platforms by leveraging different hardware compute blocks. You can use this framework to generate a symbolic compute graph of operations, where each operation can output a set of tensors used as edges of the graph. The tensors represent multidimensional data that objects like MTLBuffer or MTLTexture can back. After you construct the graph, you can compile it into an executable to optimize for performance and subsequently run the executable on your input data. This framework also provides the ability to serialize the executables and load executables from a serialized .mpsgraphpackage.

Create Better Backends with Swift

Hummingbird takes advantage of Swift to make it easy and enjoyable to create robust backends.

A crowd sourced repository for examples of Swift's native Regex type.
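As a taste of what such examples look like, here is a small sketch using the extended `#/.../#` literal syntax (available since Swift 5.7, and unlike bare `/.../` literals it needs no language flag in Swift 5 mode); the input string is made up:

```swift
// Extended regex literal with named captures.
let date = #/(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/#

if let match = "Gist last active 2024-11-16".firstMatch(of: date) {
    print(match.year, match.month, match.day)  // prints "2024 11 16"
}
```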

Swift IDE

Write and run Swift code easily and professionally!

Swifty Compiler app is a great way to get an algorithm or method down on the go and make sure it works.

You can use it as a playground to test Swift code quickly or review concepts.

August

Lessons for Individual Contributors and Managers from 10 Years at Google

In this insightful and comprehensive guide, Addy Osmani shares more than a decade of experience working on the Chrome team at Google, uncovering secrets to engineering effectiveness, efficiency, and team success. Engineers and engineering leaders looking to scale their effectiveness and drive transformative results within their teams and organizations will learn the essential principles, tips, and frameworks for building highly effective engineering teams.

Osmani presents best practices and proven strategies that foster engineering excellence in organizations of all sizes. Through practical advice and real-world examples, Leading Effective Engineering Teams empowers you to create a thriving engineering culture where individuals and teams can excel. Unlock the full potential of your engineering team and achieve unparalleled success by harnessing the power of trust, commitment, and accountability.

Swift is an exciting Open Source programming language developed by Apple. Swift on RISC-V is all about getting Swift onto RISC-V devices. This can be anything from small developer boards and IoT devices to high-performance cloud servers and PCs.

I am Neil Jones, the engineer responsible for porting Swift to RISC-V and the creator of the “Swift on RISC-V” project. Additionally, I am the creator and maintainer of the “Swift Community Apt Repository”, which includes building all of the packages hosted on that repository.

Want to install Swift on riscv64 in 3 easy steps using the Swift Community Apt Repository? Let’s dive in!

From Apple's Data and Privacy page, you can request to transfer the playlists that you’ve made in Apple Music to YouTube Music.

Transfer playlists from Apple Music

  • When you transfer playlists to YouTube Music, they aren’t deleted from Apple Music.
  • The transfer process typically takes a few minutes, although it might take up to several hours depending on the number of playlists that you’re transferring.

You can request to transfer the playlists that you've made in YouTube Music to Apple Music.

Transfer playlists to Apple Music

  • When you transfer playlists to Apple Music, they aren’t deleted from YouTube Music.
  • The transfer process typically takes a few minutes, although it might take up to several hours depending on the number of playlists that you’re transferring.

Hi everyone. For Embedded Swift and other low-overhead and performance sensitive code bases, we're looking into improving Swift's support for fixed-capacity data structures with inline storage. @Alejandro has been working on adding support for integer generic parameters, which is one step towards allowing these sorts of types to be generic over their capacity.

This may be going even further off-topic, but it feels like everyone is trying to find workarounds for not being able to access Swift's built-in SwiftSyntax. That's why I wrote Swift macros without requiring swift-syntax last year and why I tried to discuss Passing Syntax directly to the macro without swift-syntax a few days ago, and this project, and I assume many more POCs.

Drag, drop, done.

Rewrite Git history with a single drag-and-drop. Undo anything with ⌘Z. All speed, no bumps.

Keyoxide is a decentralized tool to create and verify decentralized online identities.

Just like passports for real-life identities, Keyoxide can be used to verify the online identity of people, to make sure one is interacting with the person they intended and not an imposter.

Unlike real life passports, Keyoxide works with online identities or "personas", meaning these identities can be anonymous and one can have multiple separate personas to protect their privacy, both online and in real life.

Here is what a Keyoxide profile looks like.

Get started and create your own!

Swift Macros, while powerful, can hinder build times. This blog post explains why and what we can do to mitigate the issue.

Swift Macros were introduced in September 2023 alongside Xcode 15 and have become a powerful tool for developers to leverage the compiler to generate code. The community quickly adopted them and started building and sharing them as Swift Packages that teams could integrate into their projects. At Tuist, we started using Mockable as a tool to generate mocks from protocols, which we had previously been doing manually.

However, Swift Macros quickly revealed a serious challenge: they can significantly increase build times, causing slow feedback cycles both locally and in CI environments. This blog post aims to explain where the build time slowness comes from, what potential solutions we might see Apple adopting, and what we can do in the meantime to mitigate the issue.

WebAssembly to the rescue

There’s a technology that ticks all the boxes for what a Swift Macro needs:

  • A way to run safely in a runtime.
  • A way to ship a compiled version of it that runs in any version of the runtime.

That technology is WebAssembly, and Kabir Oberai had the brilliant idea to support that as the technology to run Swift Macros. And thanks to the WasmKit runtime, the problem is not only solved for the Darwin platform but also for Windows and Linux. There’s an ongoing conversation in the Swift Community forum, so hopefully, we’ll see this technology being adopted soon, which will require Swift Macro authors to compile their Swift Macros to .wasm binaries and ship them alongside the source code.

Spatial Computing with visionOS

Step into the world of visionOS development with SwiftUI, RealityKit, and ARKit.

Access the elements of a collection.

Classes, structures, and enumerations can define subscripts, which are shortcuts for accessing the member elements of a collection, list, or sequence. You use subscripts to set and retrieve values by index without needing separate methods for setting and retrieval. For example, you access elements in an Array instance as someArray[index] and elements in a Dictionary instance as someDictionary[key].

You can define multiple subscripts for a single type, and the appropriate subscript overload to use is selected based on the type of index value you pass to the subscript. Subscripts aren’t limited to a single dimension, and you can define subscripts with multiple input parameters to suit your custom type’s needs.
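The description above can be sketched with the canonical multiplication-table subscript from The Swift Programming Language:

```swift
// A read-only subscript on a custom type: timesTable[i] returns multiplier * i.
struct TimesTable {
    let multiplier: Int

    subscript(index: Int) -> Int {
        multiplier * index
    }
}

let threeTimes = TimesTable(multiplier: 3)
print(threeTimes[6]) // 18
```

Because the subscript is computed from the stored `multiplier`, no setter is needed; a read-write subscript would add `get`/`set` clauses instead.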

As a developer for Apple platforms, you probably work on multiple projects with different coding styles and conventions, and find yourself adjusting Xcode’s editor settings every time you switch between projects. This can be a tedious process that you might forget or overlook and, if the project does not have a linter that enforces the coding style, you might end up with inconsistent code formatting across the codebase.

Thankfully Xcode 16 adds support for EditorConfig files, which allows you to define Xcode editor settings in a programmatic way on a per-project basis. In this article, you will learn how to set up EditorConfig files in Xcode and what settings are supported at this time.

Learn how you can use Swift 5.7 to design advanced abstractions using protocols. We'll show you how to use existential types, explore how you can separate implementation from interface with opaque result types, and share the same-type requirements that can help you identify and guarantee relationships between concrete types. To get the most out of this session, we recommend first watching “Embrace Swift generics" from WWDC22.

So those are my favorite SwiftUI additions from WWDC 2024. As we have seen in recent years, Apple is gradually converging the OSes, especially with SwiftUI, which means that a lot of code is no longer platform-specific. But as developers, we still have the responsibility to make our apps look, feel and behave in a way that suits the platform, whatever it is.

One other point is that a lot of resources will state that they are for iOS, but many of them are totally valid for macOS too. Don’t skip an article or video just because it doesn’t label itself as specifically for macOS.

Request permission to display alerts, play sounds, or badge the app’s icon in response to a notification.

Local and remote notifications get a person’s attention by displaying an alert, playing sounds, or badging your app’s icon. These interactions occur when your app isn’t running or is in the background. They let people know that your app has relevant information for them to view. Because a person might consider notification-based interactions disruptive, you must obtain permission to use them.
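A minimal sketch of requesting that permission with the UserNotifications framework, using the async `requestAuthorization(options:)` API; the error handling is illustrative:

```swift
import UserNotifications

// Ask the user for permission to show alerts, play sounds, and badge the icon.
func requestNotificationPermission() async {
    let center = UNUserNotificationCenter.current()
    do {
        let granted = try await center.requestAuthorization(options: [.alert, .sound, .badge])
        print("Notification permission granted:", granted)
    } catch {
        print("Authorization request failed:", error)
    }
}
```

The system shows the permission prompt only once; subsequent calls return the stored decision, which you can inspect via `center.notificationSettings()`.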

Learn what to do if your Apple devices don’t see Apple push notifications when connected to a network.

If you use a firewall or private Access Point Name for cellular data, your Apple devices must be able to connect to specific ports on specific hosts:

  • TCP port 5223 to communicate with APNs.
  • TCP port 443 or 2197 to send notifications to APNs.

TCP port 443 is used during device activation, and afterwards for fallback if devices can't reach APNs on port 5223. The connection on port 443 uses a proxy as long as the proxy allows the communication to pass through without decrypting.

The APNs servers use load balancing, so your devices don't always connect to the same public IP address for notifications. It's best to let your device access these ports on the entire 17.0.0.0/8 address block, which is assigned to Apple.

Taking URLs beyond the Web in SwiftUI

Enter openURL, an environment value that, when summoned, lets you do exactly what its name says: open a URL, and to do so programmatically from within a View.

But openURL is of type OpenURLAction, which is exposed to us and gives us the ability to override openURL in the Environment.

Let’s dig in, together, to understand how these work and for what purposes would we want to use them. We’ll even learn a few creative tricks that can bust open how we can incorporate links better in our app experiences.

This document describes how to set up a development loop for people interested in contributing to Swift.

If you are only interested in building the toolchain as a one-off, there are a couple of differences:

  1. You can ignore the parts related to Sccache.
  2. You can stop reading after Building the project for the first time.

Swift is a mature and powerful language that can be used way beyond development for Apple platforms. Due to its low memory footprint, performance and safety features, it has become a popular choice for server-side development.

One particular use case where Swift shines is in the development of Serverless applications using AWS Lambdas and, since I have been building and deploying them for many use cases for a while now, I thought I would share my experience and some tips in this comprehensive guide.

Use the keyboard, mouse, or trackpad of your Mac to control up to two other nearby Mac or iPad devices, and work seamlessly between them.

You can instantly send bitcoin to any $cashtag or another Lightning compatible wallet for free with Cash App

Use a scene-based life cycle in SwiftUI while keeping your existing codebase.

Take advantage of the declarative syntax in SwiftUI and its compatibility with spatial frameworks by moving your app to the SwiftUI life cycle.

Moving to the SwiftUI life cycle requires several steps, including changing your app’s entry point, configuring the launch of your app, and monitoring life-cycle changes with the methods that SwiftUI provides.

A type that represents the structure and behavior of an app.

Create an app by declaring a structure that conforms to the App protocol. Implement the required body computed property to define the app’s content:

@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Hello, world!")
        }
    }
}

Precede the structure’s declaration with the @main attribute to indicate that your custom App protocol conformer provides the entry point into your app. The protocol provides a default implementation of the main() method that the system calls to launch your app. You can have exactly one entry point among all of your app’s files.

Compose the app’s body from instances that conform to the Scene protocol. Each scene contains the root view of a view hierarchy and has a life cycle managed by the system. SwiftUI provides some concrete scene types to handle common scenarios, like for displaying documents or settings. You can also create custom scenes.

Improve your UI test’s stability by handling interface changes that block the UI elements under test.

Use XCTestCase UI interruption monitors to handle situations in which unrelated UI elements might appear and block the test’s interaction with elements in the workflow under test. The following situations could result in a blocked test:

  • Your app presents a modal view that takes focus away from the UI under test, as can happen, for example, when a background task fails and you notify the user of the failure.
  • Your app performs an action that causes the operating system to present a modal UI. An example is an action that presents a photo picker, which may make the system request access to photos if the user hasn’t already granted it.
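A sketch of such an interruption monitor in XCTest; the alert button label, app buttons, and test flow are assumptions for illustration:

```swift
import XCTest

final class PhotoPickerUITests: XCTestCase {
    func testPickingPhotoSurvivesPermissionAlert() {
        // Register a handler for UI that may interrupt the test, such as a
        // system photos-permission alert. Return true if the alert was handled.
        addUIInterruptionMonitor(withDescription: "Photos Permission") { alert in
            let allow = alert.buttons["Allow Access to All Photos"] // label is an assumption
            if allow.exists {
                allow.tap()
                return true
            }
            return false
        }

        let app = XCUIApplication()
        app.launch()
        app.buttons["Pick Photo"].tap() // hypothetical button in the app under test

        // Interruption monitors only fire when the test next interacts with the app,
        // so a follow-up interaction nudges the handler to run.
        app.tap()
    }
}
```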

Apple provided a great shortcut for customizing different View layouts just by passing some parameters within a closure syntax. With that, you can manage complex and different contexts just by defining the types of parameters your component expects and then mapping those parameter types onto the respective block builders to produce different layouts. This makes SwiftUI an even more powerful tool and improves the reusability of your code. I hope this helps you simplify your Views and that you enjoyed ;).

Remote push notifications are messages that app developers can send to users directly on their devices from a remote server. These notifications can appear even if the app is not open, making them a powerful tool for re-engaging users or delivering timely information. They are different from local notifications, which are scheduled and triggered by the app itself on the device.

Adding remote notifications capability to an iOS app is a quite involved process that includes several steps and components. This post will walk you through all the necessary setup so that you can enable remote push notification functionality in your iOS project.

Note that to be able to fully configure and test remote push notifications, you will need an active Apple developer account.

The environment for push notifications.

This key specifies whether to use the development or production Apple Push Notification service (APNs) environment when registering for push notifications.

Xcode sets the value of the entitlement based on your app's current provisioning profile. For example, if you're using a development provisioning profile, Xcode sets the value to development; production provisioning profiles, as well as prerelease versions distributed to beta testers, use production. These default settings can be modified. The development environment is also referred to as the sandbox environment.

Use this entitlement for both the UserNotifications and PushKit frameworks.

To add this entitlement to your app, enable the Push Notifications capability in Xcode.

Build synchronization constructs using low-level, primitive operations.

A synchronization primitive that protects shared mutable state via mutual exclusion.

The Mutex type offers non-recursive exclusive access to the state it is protecting by blocking threads attempting to acquire the lock. Only one execution context at a time has access to the value stored within the Mutex, allowing for exclusive access.

An example use of Mutex in a class used simultaneously by many threads protecting a Dictionary value:

class Manager {
    let cache = Mutex<[Key: Resource]>([:])

    func saveResource(_ resource: Resource, as key: Key) {
        cache.withLock {
            $0[key] = resource
        }
    }
}

Perform an atomic add operation and return the old and new value, applying the specified memory ordering.
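That operation corresponds to `wrappingAdd(_:ordering:)` on `Atomic` in the Swift 6 Synchronization module; a minimal sketch:

```swift
import Synchronization

// An atomic counter; Atomic methods work on a `let` binding.
let counter = Atomic<Int>(0)

// Atomically add 5 and receive both the old and new values.
let (old, new) = counter.wrappingAdd(5, ordering: .relaxed)
// old == 0, new == 5
```

`.relaxed` is the weakest ordering; stronger orderings such as `.sequentiallyConsistent` are available when the add must synchronize with other memory operations.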

Render a capture stream with rose-tinted filtering and depth effects.

The Virtual Boy in true 3D

Available in true 3D on Apple Vision Pro

VirtualFriend: A new, open source Nintendo Virtual Boy emulator

Relive the unique red-and-black world of Nintendo's most ambitious '90s console. Whether you're experiencing these classics for the first time or revisiting fond memories, VirtualFriend delivers the definitive Virtual Boy experience on Apple Vision Pro and iOS devices.

Experience the Virtual Boy in high fidelity 3D on the Apple Vision Pro, or play on the go on iOS.

  • Explore the entire official library of Virtual Boy titles and the most popular homebrew with provided metadata and 3D title screen previews
  • Tired of red and black? Adjust the display's color palette between a series of presets, or choose your own colors
  • Play using flexible controls; either touchscreen, controller, or keyboard

Build Metal apps quicker and easier using a common set of utility classes.

A queue for Swift concurrency

This package exposes a single type: AsyncQueue. Conceptually, AsyncQueue is very similar to a DispatchQueue or OperationQueue. However, unlike these, an AsyncQueue can accept async blocks. This exists to more easily enforce ordering across unstructured tasks without requiring explicit dependencies between them.

Our goal is to gather together the very best technical minds and work with trusted partners to create the most innovative products powered by RISC-V chipsets.

For developers, we will bring the best development experience to our customers through state-of-the-art hardware, software, and related services.

For consumers, we will raise the bar for RISC-V products by developing high-performance, high-quality, cost-effective innovations that will bring the advantages of RISC-V technology to everyone.

2factorauth is a non-profit organization registered in Sweden with members across the globe. Our mission is to be an independent source of information on which services support MFA/2FA and help consumers demand MFA/2FA on the services that currently don’t. Together, we’re able to get more platforms to #Support2FA.

Symbol images are vector-based icons from Apple's SF Symbols library, designed for use across Apple platforms. These scalable images adapt to different sizes and weights, ensuring consistent, high-quality icons throughout our apps. Using symbol images in SwiftUI is straightforward with the Image view and the system name of the desired symbol.

Enhancing symbol images in SwiftUI can significantly improve our app's look and feel. By adjusting size, color, rendering modes, variable values, and design variants, we can create icons that make our app more intuitive and visually appealing. SwiftUI makes these adjustments straightforward, enabling us to easily implement and refine these customizations for a better user experience.

The method we present is a partial implementation of the algorithm in Kokojima et al. 2006 paper. The reason it is not a full implementation of Kokojima's solution is that we have not adopted their more optimal multi-sampling method and have rather followed a simpler, more costly, brute-force approach, as the performance difference on Apple GPUs is likely minimal.

Since these curves can (and commonly do) have many self-intersecting loops with a mixture of convex and concave curve sections, most solutions end up with a complex pre-processing stage that builds a detailed (non-overlapping) geometry. Kokojima et al. trades off this complexity for extra GPU rendering cost.

The Kokojima et al. method consists of three distinct steps. The initial two steps are dedicated to setting up a stencil buffer, while the final step is responsible for shading the stenciled area.

Tools from the community and partners to simplify tasks and automate processes

July

Homomorphic encryption (HE) is a cryptographic technique that enables computation on encrypted data without revealing the underlying unencrypted data to the operating process. It provides a means for clients to send encrypted data to a server, which operates on that encrypted data and returns a result that the client can decrypt. During the execution of the request, the server itself never decrypts the original data or even has access to the decryption key. Such an approach presents new opportunities for cloud services to operate while protecting the privacy and security of a user’s data, which is obviously highly attractive for many scenarios.

Implement the Live Caller ID Lookup app extension to provide call-blocking and identity services.

With the Live Caller ID Lookup app extension, you can provide caller ID and call-blocking services from a server you maintain. The app extension tells the system how to communicate with your server. When someone’s device receives a phone call, the system communicates with your back-end server to retrieve caller ID and blocking information, and then displays that information on the incoming call screen and in the device’s recent phone calls.

Use UnionValue when you have some existing types and you want to take one of them. You could think of this as an “or” parameter.

You should be using non-Sendable types

I think non-Sendable types are tremendously useful. They are much easier to use with protocols. They are just as “thread-safe” as an isolated type. And now we have a way for them to have usable async methods. There is a small hole in their concurrency story. But, overall I think they can be a really powerful tool for modelling mutable state that can work with arbitrarily-isolated clients.

Non-Sendable types are great and you should use them!

IndexedEntity represents an App Entity decorated with an attribute set: a set of attributes that enables the system to perform structured indexing and queries of entities.

Ethersync enables real-time co-editing of local text files. You can use it for pair programming or note-taking, for example! Think Google Docs, but from the comfort of your favorite text editor!

Create app intents and entities to integrate your app’s photo and video functionality with Siri and Apple Intelligence.

To integrate your app’s photo and video capabilities with Siri and Apple Intelligence, you use Swift macros that generate additional properties and add protocol conformance for your app intent, app entity, and app enumeration implementation that Apple Intelligence needs. For example, if your app allows someone to open a photo, use the AssistantIntent(schema:) macro and provide the assistant schema that consists of the photos domain and the openAsset schema:

@AssistantIntent(schema: .photos.openAsset)
struct OpenAssetIntent: OpenIntent {
    var target: AssetEntity

    @Dependency
    var library: MediaLibrary

    @Dependency
    var navigation: NavigationManager

    @MainActor
    func perform() async throws -> some IntentResult {
        let assets = library.assets(for: [target.id])
        guard let asset = assets.first else { throw IntentError.noEntity }
        navigation.openAsset(asset)
        return .result()
    }
}


To learn more about assistant schemas, see [Integrating your app with Siri and Apple Intelligence](https://developer.apple.com/documentation/appintents/integrating-your-app-with-siri-and-apple-intelligence). For a list of available app intents in the [photos](https://developer.apple.com/documentation/appintents/assistantschema/model/photos-8mzhg) domain, see [AssistantSchema.PhotosIntent](https://developer.apple.com/documentation/appintents/assistantschema/photosintent).

An interface to express that a custom type has a predefined, static set of valid values to display.

Adopt the AppEnum protocol in a type that has a known set of valid values. You might use this protocol to specify that a variable of one of your intents has a fixed set of possible values. For example, you might use a variable to specify whether to navigate to the next or previous track in a music playlist.

Because this type conforms to the StaticDisplayRepresentable protocol, provide a string-based representation of your type’s values in your implementation. For example, provide descriptions for each case of an enum type in the inherited caseDisplayRepresentations property.
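A sketch of an AppEnum conformance for the playlist-navigation example above; the type name, case names, and display strings are illustrative:

```swift
import AppIntents

// A fixed set of valid values for an intent parameter.
enum NavigationDirection: String, AppEnum {
    case next
    case previous

    // How the system names this type in UI.
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Direction"

    // A display representation for each case, as required by the protocol.
    static var caseDisplayRepresentations: [NavigationDirection: DisplayRepresentation] = [
        .next: "Next Track",
        .previous: "Previous Track"
    ]
}
```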

Create app intents, entities, and enumerations that conform to assistant schemas to tap into the enhanced action capabilities of Siri and Apple Intelligence.

Apple Intelligence is a new personal intelligence system that deeply integrates powerful generative models into the core of iPhone, iPad and Mac. Siri draws on the capabilities of Apple Intelligence to deliver assistance that’s more natural, contextually relevant and personal to users. A big part of people’s personal context are the apps they use every day. The App Intents framework gives you a means to express your app’s capabilities and content to the system and integrate with Siri and Apple Intelligence. This will unlock new ways for your users to interact with your app from anywhere on their device.

Add assistant schemas to your app and integrate your app with Siri and Apple Intelligence, and support system experiences like Spotlight.

Using this sample app, people can keep track of photos and videos they capture with their device and can use Siri to access app functionality. To make its main functionality available to Siri, the app uses the App Intents framework.

A Swift macro you use to make sure your app intent conforms to an assistant schema.

Add reference documentation to your symbols that explains how to use them.

To help the people who use your API have a better understanding of it, follow the steps in the sections below to add documentation comments to the symbols in your project. DocC compiles those comments and generates formatted documentation that you share with your users. For frameworks and packages, add the comments to the public symbols, and for apps, add the comments to both the internal and public symbols.

For a deeper understanding of how to write symbol documentation, please refer to Writing Symbol Documentation in Your Source Files on Swift.org.

Getting the dimension of an element using JavaScript is a trivial task. You barely even need to do anything. If you have a reference to an element, you’ve got the dimensions (i.e. el.offsetWidth / el.offsetHeight). But we aren’t so lucky in CSS. While we’re able to react to elements being particular sizes with @container queries, we don’t have access to a straight up number we could use to, for example, display on the screen.

It may sound impossible but it’s doable! There are no simple built-in functions for this, so get ready for some slightly hacky experimentation.

Instantly boost your productivity and launch apps quicker.

A prototype of new search features, using the strength of our AI models to give you fast answers with clear and relevant sources.

Whether you’re building the next big thing or tweaking your current project, we’re here to make the process smoother and more intuitive, built and operated by the Pixelfed project.

Specify different input parameters to generate multiple test cases from a test function.

Some tests need to be run over many different inputs. For instance, a test might need to validate all cases of an enumeration. The testing library lets developers specify one or more collections to iterate over during testing, with the elements of those collections being forwarded to a test function. An invocation of a test function with a particular set of argument values is called a test case.

By default, the test cases of a test function run in parallel with each other. For more information about test parallelization, see Running tests serially or in parallel.
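A minimal sketch of a parameterized test using Swift Testing's `@Test(arguments:)`; the enum is illustrative:

```swift
import Testing

enum Fruit: String, CaseIterable {
    case apple, banana, cherry
}

// The testing library invokes this function once per element of the collection,
// so each fruit becomes its own test case that can pass or fail independently.
@Test("Every fruit has a name", arguments: Fruit.allCases)
func fruitHasName(_ fruit: Fruit) {
    #expect(!fruit.rawValue.isEmpty)
}
```

Passing two collections to `arguments:` produces the cartesian product of inputs, which is worth keeping in mind for combinatorial growth.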

Registers a handler to invoke in response to a URL that your app receives.

Use this view modifier to receive URLs in a particular scene within your app. The scene that SwiftUI routes the incoming URL to depends on the structure of your app, what scenes are active, and other configuration. For more information, see handlesExternalEvents(matching:).

UI frameworks traditionally pass Universal Links to your app using an NSUserActivity. However, SwiftUI passes a Universal Link to your app directly as a URL, which you receive using this modifier. To receive other user activities, like when your app participates in Handoff, use the onContinueUserActivity(_:perform:) modifier instead.

For more information about linking into your app, see Allowing apps and websites to link to your content.
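A minimal sketch of receiving a URL with the `onOpenURL(perform:)` modifier; the view and routing are illustrative:

```swift
import SwiftUI

struct ContentView: View {
    @State private var lastURL: URL?

    var body: some View {
        Text(lastURL?.absoluteString ?? "No link received yet")
            .onOpenURL { url in
                // SwiftUI delivers custom-scheme deep links and Universal Links
                // here as a plain URL; route it however your app needs.
                lastURL = url
            }
    }
}
```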

Symbol images are vector-based icons from Apple's SF Symbols library, designed for use across Apple platforms. These scalable images adapt to different sizes and weights, ensuring consistent, high-quality icons throughout our apps. Using symbol images in SwiftUI is straightforward with the Image view and the system name of the desired symbol.

SwiftUI provides a variety of views, a large number of which are actually actionable controls, including buttons, pickers, the toggle, slider, stepper and more. All controls have readable labels, but some of them display their label outside the area that users can interact with. For these controls specifically, it is possible to hide labels when their appearance is not desirable for various reasons. For instance, they might not fit the look of the rest of the UI, or the control’s function is clear from the context. Managing that is extremely simple thanks to a not-so-well-known view modifier, details of which are presented right next.
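The modifier in question is `labelsHidden()`; a minimal sketch with an illustrative picker:

```swift
import SwiftUI

struct VolumePicker: View {
    @State private var level = 1

    var body: some View {
        // The label "Volume" stays available to accessibility features,
        // but labelsHidden() keeps it from being drawn next to the control.
        Picker("Volume", selection: $level) {
            ForEach(0..<3) { value in
                Text("\(value)").tag(value)
            }
        }
        .labelsHidden()
    }
}
```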

I started working on supporting Xcode 16’s features in XcodeProj. One of those features is internal “synchronized groups”, which Apple introduced to minimize git conflicts in Xcode projects. In a nutshell, they replace many references to files in the file system with a reference to a folder containing a set of files that are part of a target. Xcode dynamically synchronizes the files, hence the name, in the same way packages are synchronized when needed.

What problem do some and any solve?

  • The impact of some is across variables. It enforces identical types to be returned.
  • The impact of any is on a single variable. It has no enforcement to keep returned types identical.

| some | any |
| :--- | :--- |
| Holds a fixed concrete type | Holds an arbitrary concrete type |
| Guarantees type relationships | Erases type relationships |
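A small sketch of the distinction, assuming a simple Shape protocol:

```swift
protocol Shape {
    func area() -> Double
}

struct Square: Shape {
    let side: Double
    func area() -> Double { side * side }
}

struct Circle: Shape {
    let radius: Double
    func area() -> Double { .pi * radius * radius }
}

// `some Shape`: every call returns the same fixed concrete type (Square here);
// the compiler would reject a branch that returned Circle instead.
func makeSquare() -> some Shape {
    Square(side: 2)
}

// `any Shape`: a type-erased box, so different calls may return different
// concrete types.
func makeShape(round: Bool) -> any Shape {
    round ? Circle(radius: 1) : Square(side: 1)
}
```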

A parse strategy for creating URLs from formatted strings.

Create an explicit URL.ParseStrategy to parse multiple strings according to the same parse strategy. The following example creates a customized strategy, then applies it to multiple URL candidate strings.

A structure that converts between URL instances and their textual representations.

Instances of URL.FormatStyle create localized, human-readable text from URL instances and parse string representations of URLs into instances of URL.
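A sketch of round-tripping with these types, assuming the Foundation APIs introduced around iOS 16/macOS 13; the address is illustrative:

```swift
import Foundation

let input = "https://www.example.com/docs/index.html"

// Parse a string into a URL using the .url parse strategy.
if let parsed = try? URL(input, strategy: .url) {
    // Convert the URL back into human-readable text with the .url format style.
    let text = parsed.formatted(.url)
    print(text)
}
```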

The root object for a universal links service definition.

Properties

| Key | Type | Description |
| ---: | :--- | :--- |
| defaults | applinks.Defaults | The global pattern-matching settings to use as defaults for all universal links in the domain. |
| details | [applinks.Details] | An array of Details objects that define the apps and the universal links they handle for the domain. |
| substitutionVariables | applinks.SubstitutionVariables | Custom variables to use for simplifying complex pattern matches. Each name acts as a variable that the system replaces with each string in the associated string array. |

Today’s goal is to parse URLs like http://mywebsite.org/customers/:cid/orders/:oid so that we can determine it’s a customer’s order request and extract the order #oid and customer #cid from it.

We’ll try and do that in an elegant way, using pattern matching and variable binding.
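One way to sketch that, with a switch over path components and value binding; the helper function is an assumption for illustration, not the article's code:

```swift
// Extracts (customer, order) from paths shaped like /customers/:cid/orders/:oid.
func parseOrderPath(_ path: String) -> (customer: String, order: String)? {
    let parts = path.split(separator: "/").map(String.init)
    guard parts.count == 4 else { return nil }

    // Pattern matching with variable binding on the path components.
    switch (parts[0], parts[1], parts[2], parts[3]) {
    case ("customers", let cid, "orders", let oid):
        return (cid, oid)
    default:
        return nil
    }
}
```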

Apple Vision Pro users will experience breathtaking series, films, and more spanning action-adventure, documentary, music, scripted, sports, and travel

Starting this week, Apple is releasing all-new series and films captured in Apple Immersive Video that will debut exclusively on Apple Vision Pro. Apple Immersive Video is a remarkable storytelling format that leverages 3D video recorded in 8K with a 180-degree field of view and Spatial Audio to transport viewers to the center of the action.

Boundless, a new series that invites viewers to experience once-in-a-lifetime trips from wherever they are, premieres at 6 p.m. PT today, July 18, with “Hot Air Balloons.” The next installment of Wild Life, the nature documentary series that brings viewers up close to some of the most charismatic creatures on the planet, premieres in August. Elevated, an aerial travel series that whisks viewers around iconic vistas from staggering heights, will launch in September.

Later this year, users can enjoy special performances featuring the world’s biggest artists, starting with an immersive experience from The Weeknd; the first scripted Apple Immersive short film, Submerged, written and directed by Academy Award winner Edward Berger; a behind-the-scenes and on-the-court view of the 2024 NBA All-Star Weekend; and Big-Wave Surfing, the first installment of a new sports series with Red Bull.

“Apple Immersive Video is a groundbreaking leap forward for storytelling, offering Apple Vision Pro users remarkable experiences with an unparalleled sense of realism and immersion,” said Tor Myhren, Apple’s vice president of Marketing Communications. “From soaring over volcanoes in Hawaii and surfing huge waves in Tahiti, to enjoying performances by the world’s biggest artists and athletes from all-new perspectives, Apple Immersive Video revolutionizes the way people experience places, stories, sports, and more by making viewers feel like they’re truly there. It’s the next generation of visual storytelling, and we’re excited to bring it to more people around the world.”

Steve’s talk at the 1983 International Design Conference in Aspen

Introduction by Jony Ive

Steve rarely attended design conferences. This was 1983, before the launch of the Mac, and still relatively early days of Apple. I find it breathtaking how profound his understanding was of the dramatic changes that were about to happen as the computer became broadly accessible. Of course, beyond just being prophetic, he was fundamental in defining products that would change our culture and our lives forever.

On the eve of launching the first truly personal computer, Steve is not solely preoccupied with the founding technology and functionality of the product’s design. This is extraordinarily unusual, as in the early stages of dramatic innovation, it is normally the primary technology that benefits from all of the attention and focus.

Steve points out that the design effort in the U.S. at the time had been focused on the automobile, with little consideration or effort given to consumer electronics. While it is not unusual to hear leaders talk about the national responsibility to manufacture, I thought it was interesting that he talked about a nation’s responsibility to design.

In the talk, Steve predicts that by 1986 sales of the PC would exceed sales of cars, and that in the following ten years, people would be spending more time with a PC than in a car. These were absurd claims for the early 1980s. Describing what he sees as the inevitability that this would be a pervasive new category, he asks the designers in the audience for help. He asks that they start to think about the design of these products, because designed well or designed poorly, they still would be made.

Steve remains one of the best educators I’ve ever met in my life. He had that ability to explain incredibly abstract, complex technologies in terms that were accessible, tangible and relevant. You hear him describe the computer as doing nothing more than completing fairly mundane tasks, but doing so very quickly. He gives the example of running out to grab a bunch of flowers and returning by the time you could snap your fingers – speed rendering the task magical.

When I look back on our work, what I remember most fondly are not the products but the process. Part of Steve’s brilliance was how he learned to support the creative process, encouraging and developing ideas even in large groups of people. He treated the process of creating with a rare and wonderful reverence.

The revolution Steve described over 40 years ago did of course happen, partly because of his profound commitment to a kind of civic responsibility. He cared, way beyond any sort of functional imperative. His was a victory for beauty, for purity and, as he would say, for giving a damn. He truly believed that by making something useful, empowering and beautiful, we express our love for humanity.

Prepare your app to respond to an incoming universal link.

When a user activates a universal link, the system launches your app and sends it an NSUserActivity object. Query this object to find out how your app launched and to decide what action to take.

To support universal links in your app:

  1. Create a two-way association between your app and your website and specify the URLs that your app handles. See Supporting associated domains.
  2. Update your app delegate to respond when it receives an NSUserActivity object with the activityType set to NSUserActivityTypeBrowsingWeb.
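A minimal sketch of step 2, assuming a UIKit app delegate; the routing logic and the `showProduct(id:)` helper are hypothetical, illustrating how you might query the activity's `webpageURL`:

```swift
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        // Universal links arrive as a browsing-web activity with the URL attached.
        guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
              let incomingURL = userActivity.webpageURL,
              let components = URLComponents(url: incomingURL, resolvingAgainstBaseURL: true)
        else {
            return false
        }
        // Route based on the path; this is app-specific.
        if components.path.hasPrefix("/products"),
           let id = components.queryItems?.first(where: { $0.name == "id" })?.value {
            showProduct(id: id)
            return true
        }
        return false
    }

    func showProduct(id: String) {
        // Navigate to the matching screen (app-specific).
    }
}
```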

On June 25th, I interviewed Tim Sweeney, Founder and CEO of Epic Games, which makes the Unreal Engine and Fortnite, and Neal Stephenson, the #1 New York Times bestselling author who also coined the term “Metaverse” in his 1992 bestseller Snow Crash, and is a Co-Founder of blockchain start-up Lamina1, and AI storytelling platform Whenere.

In the interview, we discuss their definitions of “Metaverse,” thoughts on its technological and economic growth, Neal’s reaction on the day Facebook changed its name to Meta, the future of Fortnite, Apple’s Vision Pro, blockchains, and the ethics of Generative AI, plus “Snow Crash 2," and much more.

Display content and descriptions, provide channel guides, and support multiple users on Apple TV.

Use the TVServices framework to display content prominently on the screen and to speed up user login. You can highlight media and other information from your app in the top shelf area. For example, a video playback app might show the user’s most recently viewed videos. The system displays your media items when the user selects your app on the tvOS Home Screen; your app doesn’t need to be running. You provide top shelf content using a Top Shelf app extension, which you include in the bundle of your tvOS app.

Apps that manage multiple user profiles can accelerate the login process by retaining the profile for each Apple TV user. Apple TV supports multiple user accounts, and these accounts are separate from the profiles your app manages. Mapping the system accounts to your own profiles lets users skip profile selection screens and go straight to their content, which provides a better user experience.

Support browsing an electronic program guide (EPG) and changing channels with specialized remote buttons.

Add Unique features to Xcode's Simulator and Build Apps Faster.

Key features: User Defaults Editor, Simulator Airplane Mode, Recordings with sound, touches & bezels. Accessibility & Dynamic Type Testing, Location Simulation, Test Push Notifications, Deeplinks, and compare designs on top of the Simulator.

Inspect Network Traffic

  • Monitor in- and outgoing requests for your apps
  • Explore JSON responses, requests & response headers
  • Copy requests as cURL commands
  • Investigate request metrics

Build Insights

  • Keep track of build count and duration
  • Find out how your app's build times improve per Xcode version

User Defaults Editor

  • View and Edit User Defaults values in real time
  • Works with both standard and group User Defaults

Location Simulation

  • Scenario testing: City Run, Bicycle Run, and Freeway Drive
  • Simulate routes from start to destination using Quick Actions
  • Update GPS to a specific point on the map
  • Change the time zone whenever you update the location

Grids & Rulers helps you to create pixel-perfect design implementations

  • Use horizontal and vertical rulers
  • Measure the distance between elements in on-device pixels
  • Configure grid size and color

Quick actions for your recent builds help you increase productivity

  • Delete Derived Data easily, globally or per app, to prevent rebuilding all your Xcode projects.
  • Open common directories like your app's documents folder
  • Read and write user defaults
  • Grant, revoke, or reset permissions like photo and location access, allowing you to test related implementations quickly
  • Airplane mode: Disable Networking for your app while keeping a working connection on your Mac

Environment Overrides

  • Switch Accessibility settings like Inverted Colors and Bold Text
  • Configure any Dynamic Type directly from the side window

Deeplinks (Universal Links) and Push Notifications

  • Add quick actions to test Deeplinks and Push Notifications
  • Bundle Identifier based: actions automatically show up for recent builds
  • Launch deeplinks to test routing in your apps
  • Easily launch deeplinks from your clipboard
  • Manage and share Quick Action groups with your colleagues

Compare Designs for pixel-perfect design implementations

  • Create pixel-perfect implementations of your app’s design
  • Drag, Paste, or Select images for comparison
  • Use the overlay mode to compare your app’s implementation to its design
  • The slider allows you to slide between your app’s implementation and its design
  • Use any image source, whether it’s Sketch, Figma, or Zeplin

Magnify for precision

  • Zoom in at pixel level to verify your design implementation

Create screenshots

  • Device Bezels create that professional screen capture you need
  • Adjust the background color to match your styling

Create professional recordings to share progress

  • A popup next to the active Simulator allows you to start a recording easily
  • Enable touches to explain better how your app responds to user interaction
  • Device Bezels create that professional impression you need
  • Export-ready for App Store Connect. Creating App Previews has never been easier
  • Landscape orientation is correctly applied in the exported video
  • A floating thumbnail of the resulting recording lets you drag it into any destination easily
  • Select MP4 or GIF to match your needs
  • Trim videos for perfect lengths
  • Control the quality of exports for perfect performance

Completely customizable to fit your needs

  • All actions are available through the status bar menu as well. Configure to hide the floating windows if you feel like they're in your way
  • Configure shortcuts to perform actions even quicker

Universal Links allow you to link to content inside your app when a user opens a particular URL. Webpages open in the browser by default, but you can configure specific paths to open in your app if the user has it installed.

Redirecting users into your app is recommended to give them the most integrated mobile experience. A great example is the WeTransfer app that automatically opens transfer URLs, allowing the app to download the files using the most efficient system APIs. The alternative would require users to download the files via Safari, a much less integrated experience. Let’s dive in to see how you can add support for Universal Links.
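For reference, the server side of this association is an `apple-app-site-association` file served from your domain's `/.well-known/` directory; a minimal sketch, where the app ID and paths are placeholders:

```json
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "TEAMID123.com.example.app",
        "paths": ["/downloads/*", "/transfer/*"]
      }
    ]
  }
}
```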

Push user-facing notifications to the user’s device from a server, or generate them locally from your app.

User-facing notifications communicate important information to users of your app, regardless of whether your app is running on the user’s device. For example, a sports app can let the user know when their favorite team scores. Notifications can also tell your app to download information and update its interface. Notifications can display an alert, play a sound, or badge the app’s icon.

You can generate notifications locally from your app or remotely from a server that you manage. For local notifications, the app creates the notification content and specifies a condition, like a time or location, that triggers the delivery of the notification. For remote notifications, your company’s server generates push notifications, and Apple Push Notification service (APNs) handles the delivery of those notifications to the user’s devices.

Use this framework to do the following:

  • Define the types of notifications that your app supports.
  • Define any custom actions associated with your notification types.
  • Schedule local notifications for delivery.
  • Process already delivered notifications.
  • Respond to user-selected actions.
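As a sketch of scheduling a local notification with this framework (the identifier, content, and 60-second trigger are arbitrary):

```swift
import UserNotifications

let center = UNUserNotificationCenter.current()

// Ask for permission to show alerts, sounds, and badges.
center.requestAuthorization(options: [.alert, .sound, .badge]) { granted, _ in
    guard granted else { return }

    let content = UNMutableNotificationContent()
    content.title = "Goal!"
    content.body = "Your favorite team just scored."
    content.sound = .default

    // Deliver 60 seconds from now; time, calendar, and location triggers are available.
    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 60, repeats: false)
    let request = UNNotificationRequest(identifier: "score-update",
                                        content: content,
                                        trigger: trigger)
    center.add(request)
}
```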

The system makes every attempt to deliver local and remote notifications in a timely manner, but delivery isn’t guaranteed. The PushKit framework offers a more timely delivery mechanism for specific types of notifications, such as those VoIP and watchOS complications use. For more information, see PushKit.

For webpages in Safari version 16.0 and higher, generate remote notifications from a server that you manage using Push API code that works in Safari and other browsers.

A framework for training any-to-any multimodal foundation models. Scalable. Open-sourced. Across tens of modalities and tasks.

4M enables training versatile multimodal and multitask models, capable of performing a diverse set of vision tasks out of the box, as well as being able to perform multimodal conditional generation. This, coupled with the models' ability to perform in-painting, enables powerful image editing capabilities. These generalist models transfer well to a broad range of downstream tasks or to novel modalities, and can be easily fine-tuned into more specialized variants of themselves.

Build sophisticated animations that you control using phase and keyframe animators.

SwiftUI provides a collection of useful animations that you can use in your app. These animations help enhance the user experience of your app by providing visual transitions of views and user interface elements. While these standard animations provide a great way to enhance the user interaction of your app, there are times when you need more control over the timing and movement of a visual element. PhaseAnimator and KeyframeAnimator help give you that control.

A phase animator allows you to define an animation as a collection of discrete steps called phases. The animator cycles through these phases to create a visual transition. With a keyframe animator, you create keyframes that define animation values at specific times during the visual transition.

A container that animates its content by automatically cycling through a collection of phases that you provide, each defining a discrete step within an animation.

Use one of the phase animator view modifiers like phaseAnimator(_:content:animation:) to create a phased animation in your app.
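A minimal sketch of a two-phase animation using this modifier; the symbol, scale, and duration values are arbitrary:

```swift
import SwiftUI

struct PulsingHeart: View {
    var body: some View {
        Image(systemName: "heart.fill")
            // Cycles through the phases [false, true] while the view is visible.
            .phaseAnimator([false, true]) { content, phase in
                content
                    .scaleEffect(phase ? 1.2 : 1.0)
                    .opacity(phase ? 1.0 : 0.7)
            } animation: { _ in
                .easeInOut(duration: 0.5)
            }
    }
}
```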

Welcome to The Valley of Code. Your journey in Web Development starts here. In the fundamentals section you'll learn the basic building blocks of the Internet, the Web and how its fundamental protocol (HTTP) works.

I finally have the feeling that I’m a decent programmer, so I thought it would be fun to write some advice with the idea of “what would have gotten me to this point faster?” I’m not claiming this is great advice for everyone, just that it would have been good advice for me.

Make your iOS app launch experience faster and more responsive by customizing a launch screen.

Every iOS app must provide a launch screen, a screen that displays while your app launches. The launch screen appears instantly when your app starts up and is quickly replaced with the app’s first screen.

You create a launch screen for your app in your Xcode project in one of two ways:

  • Information property list
  • User interface file

To make the app launch experience as seamless as possible, create a launch screen with basic views that closely resemble the first screen of your app.

For guidelines about designing a launch screen, see Launching in the Human Interface Guidelines.
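For the property-list route, a minimal sketch of the `UILaunchScreen` dictionary in `Info.plist`; the color and image names are placeholders that would reference asset catalog entries:

```xml
<key>UILaunchScreen</key>
<dict>
    <key>UIColorName</key>
    <string>LaunchBackground</string>
    <key>UIImageName</key>
    <string>LaunchLogo</string>
</dict>
```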

Apply this attribute to a declaration to suppress strict concurrency checking. You can apply this attribute to the following kinds of declarations:

  • Imports
  • Structures, classes, and actors
  • Enumerations and enumeration cases
  • Protocols
  • Variables and constants
  • Subscripts
  • Initializers
  • Functions

Returning an opaque type looks very similar to using a boxed protocol type as the return type of a function, but these two kinds of return type differ in whether they preserve type identity. An opaque type refers to one specific type, although the caller of the function isn’t able to see which type; a boxed protocol type can refer to any type that conforms to the protocol. Generally speaking, boxed protocol types give you more flexibility about the underlying types of the values they store, and opaque types let you make stronger guarantees about those underlying types.

The sonos integration allows you to control your Sonos wireless speakers from Home Assistant. It also works with IKEA Symfonisk speakers.

Hide implementation details about a value’s type.

Swift provides two ways to hide details about a value’s type: opaque types and boxed protocol types. Hiding type information is useful at boundaries between a module and code that calls into the module, because the underlying type of the return value can remain private.

A function or method that returns an opaque type hides its return value’s type information. Instead of providing a concrete type as the function’s return type, the return value is described in terms of the protocols it supports. Opaque types preserve type identity — the compiler has access to the type information, but clients of the module don’t.

A boxed protocol type can store an instance of any type that conforms to the given protocol. Boxed protocol types don’t preserve type identity — the value’s specific type isn’t known until runtime, and it can change over time as different values are stored.
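A small sketch of the difference, using hypothetical `Shape` types: the opaque return type always hides the same concrete type, while the boxed type can hold a different one per call.

```swift
protocol Shape {
    func area() -> Double
}

struct Square: Shape {
    let side: Double
    func area() -> Double { side * side }
}

struct Circle: Shape {
    let radius: Double
    func area() -> Double { .pi * radius * radius }
}

// Opaque type: callers see "some Shape", but the compiler knows the
// concrete type is always Square, so type identity is preserved.
func makeSquare(side: Double) -> some Shape {
    Square(side: side)
}

// Boxed protocol type: the concrete type can vary per call and is
// only known at runtime.
func makeShape(round: Bool) -> any Shape {
    round ? Circle(radius: 1) : Square(side: 1)
}

assert(makeSquare(side: 2).area() == 4)
```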

What exactly makes code “unsafe”? Join the Swift team as we take a look at the programming language's safety precautions — and when you might need to reach for unsafe operations. We'll take a look at APIs that can cause unexpected states if not used correctly, and how you can write code more specifically to avoid undefined behavior. Learn how to work with C APIs that use pointers and the steps to take when you want to use Swift's unsafe pointer APIs. To get the most out of this session, you should have some familiarity with Swift and the C programming language. And for more information on working with pointers, check out "Safely Manage Pointers in Swift".

Come with us as we delve into unsafe pointer types in Swift. Discover the requirements for each type and how to use it correctly. We'll discuss typed pointers, drop down to raw pointers, and finally circumvent pointer type safety entirely by binding memory. This session is a follow-up to "Unsafe Swift" from WWDC20. To get the most out of it, you should be familiar with Swift and the C programming language.

The question you're asking is whether a type is trivially copyable and destroyable, which in practice is the case iff the type does not contain any reference types or existentials. There's an _isPOD() (warning: underscored API!) entry point in the stdlib for this purpose:

print(_isPOD(Int.self)) // true
print(_isPOD(Array<Int>.self)) // false

Provide app continuity for users by preserving their current activities.

This SwiftUI sample project demonstrates how to preserve your appʼs state information and restore the app to that previous state on subsequent launches. During a subsequent launch, restoring your interface to the previous interaction point provides continuity for the user, and lets them finish active tasks quickly.

When using your app, the user performs actions that affect the user interface. For example, the user might view a specific page of information, and after the user leaves the app, the operating system might terminate it to free up the resources it holds. The user can return to where they left off — and UI state restoration is a core part of making that experience seamless.

This sample app demonstrates the use of state preservation and restoration for scenarios where the system interrupts the app. The sample project manages a set of products. Each product has a title, an image, and other metadata you can view and edit. The project shows how to preserve and restore a product in its DetailView.

This page is a collection of my favorite resources for people getting started writing programming languages. I hope to keep it updated as long as I continue to find great stuff.

June

Creating a tvOS media catalog app in SwiftUI

This sample code project shows how to create the standard content lockups for tvOS, and provides best practices for building out rows of content shelves. It also includes examples for product pages, search views, and tab views, including the new sidebar adaptive tab view style that provides a sidebar in tvOS.

The sample project contains the following examples:

  • StackView implements an example landing page for a content catalog app, defining several shelves with a showcase or hero header area above them. It also gives an example of an above- and below-the-fold switching animation.
  • ButtonsView provides a showcase of the various button styles available in tvOS.
  • DescriptionView provides an example of how to build a product page similar to those you see on the Apple TV app, with a custom material blur.
  • SearchView shows an example of a simple search page using the searchable(text:placement:prompt:) and searchSuggestions(_:) modifiers.
  • SidebarContentView shows how to make a sectioned sidebar using the new tab bar APIs in tvOS 18.
  • HeroHeaderView gives an example of creating a material gradient to blur content in a certain area, fading it into unblurred content.

Adds an action to be called when the view crosses the threshold to be considered on/off screen.

Positions this view within an invisible frame with a size relative to the nearest container.

Use this modifier to specify a size for a view’s width, height, or both that is dependent on the size of the nearest container. Different things can represent a “container” including:

  • The window presenting a view on iPadOS or macOS, or the screen of a device on iOS.
  • A column of a NavigationSplitView
  • A NavigationStack
  • A tab of a TabView
  • A scrollable view like ScrollView or List

The size provided to this modifier is the size of a container like the ones listed above subtracting any safe area insets that might be applied to that container.
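A sketch of the modifier in a horizontal carousel, where each card takes 80% of the nearest container's width (here, the scroll view); the card content is arbitrary:

```swift
import SwiftUI

struct Carousel: View {
    var body: some View {
        ScrollView(.horizontal) {
            LazyHStack {
                ForEach(0..<10) { _ in
                    RoundedRectangle(cornerRadius: 12)
                        .fill(.blue)
                        // Each card spans 80% of the container width.
                        .containerRelativeFrame(.horizontal) { length, _ in
                            length * 0.8
                        }
                }
            }
        }
    }
}
```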

Creates an environment values, transaction, container values, or focused values entry.

Make binaries available to other developers by creating Swift packages that include one or more XCFrameworks.

Creating a Swift package to organize and share your code makes source files available to developers who use the Swift package as a package dependency. However, you may need to make your code available as binaries to protect your intellectual property — for example, if you’re developing proprietary, closed-source libraries.

Carefully consider whether you want to distribute your code in binary form because doing so comes with drawbacks. For example, a Swift package that contains a binary is less portable because it can only support platforms that its included binaries support. In addition, binary dependencies are only available for Apple platforms, which limits the audience for your Swift package.

To use the Montreal subway (the Métro), you tap a paper ticket against the turnstile and it opens. The ticket works through a system called NFC, but what's happening internally? How does the ticket work without a battery? How does it communicate with the turnstile? And how can it be so cheap that you can throw the ticket away after one use? To answer these questions, I opened up a ticket and examined the tiny chip inside.

If the standard OpenType shaping engine doesn't give you enough flexibility, Harfbuzz allows you to write your own shaping engine in WebAssembly and embed it into your font! Any font which contains a Wasm table will be passed to the WebAssembly shaper.

How to use a .xcconfig file and a .plist with a Swift Package Manager based project.

The goal is to explore differentiable programming in realistic settings. If autodiff + vectorization is the future, then it is important to be able to write hard programs in a differentiable style (beyond just another Transformer).

Adds an action to be performed when a value, created from a geometry proxy, changes.

The geometry of a view can change frequently, especially if the view is contained within a ScrollView and that scroll view is scrolling.

You should avoid updating large parts of your app whenever the scroll geometry changes. To aid in this, you provide two closures to this modifier:

  • transform: This converts a value of GeometryProxy to your own data type.
  • action: This receives the value your transform closure produces, and is called whenever that value changes.

For example, you can use this modifier to know how much of a view is visible on screen. In the following example, the data type you convert to is a Bool and the action is called whenever the Bool changes.

ScrollView(.horizontal) {
    LazyHStack {
        ForEach(videos) { video in
            VideoView(video)
        }
    }
}

struct VideoView: View {
    var video: VideoModel

    var body: some View {
        VideoPlayer(video)
            .onGeometryChange(for: Bool.self) { proxy in
                let frame = proxy.frame(in: .scrollView)
                let bounds = proxy.bounds(of: .scrollView) ?? .zero
                let intersection = frame.intersection(
                    CGRect(origin: .zero, size: bounds.size))
                let visibleHeight = intersection.size.height
                return (visibleHeight / frame.size.height) > 0.75
            } action: { isVisible in
                video.updateAutoplayingState(
                    isVisible: isVisible)
            }
    }
}


Extend your media viewing experience using Reality Composer Pro components like Docking Region, Reverb, and Virtual Environment Probe. Find out how to further enhance immersion using Reflections, Tint Surroundings Effect, SharePlay, and the Immersive Environment Picker.

Join us on a tour of SwiftUI, Apple's declarative user interface framework. Learn essential concepts for building apps in SwiftUI, like views, state variables, and layout. Discover the breadth of APIs for building fully featured experiences and crafting unique custom components. Whether you're brand new to SwiftUI or an experienced developer, you'll learn how to take advantage of what SwiftUI has to offer when building great apps.

C++ interoperability is a new feature in Swift 5.9. A great variety of C++ APIs can be called directly from Swift, and select Swift APIs can be used from C++.

This document is the reference guide describing how to mix Swift and C++. It describes how C++ APIs get imported into Swift, and provides examples showing how various C++ APIs can be used in Swift. It also describes how Swift APIs get exposed to C++, and provides examples showing how the exposed Swift APIs can be used from C++.

C++ interoperability is an actively evolving feature of Swift. It currently supports interoperation between a subset of language features. The status page provides an overview of the currently supported interoperability features, and lists the existing constraints as well.

Future releases of Swift might change how Swift and C++ interoperate, as the Swift community gathers feedback from real world adoption of C++ interoperability in mixed Swift and C++ codebases. Please provide the feedback that you have on the Swift forums, or by filing an issue on GitHub. Future changes to the design or functionality of C++ interoperability will not break code in existing codebases by default.


Neovim is a modern reimplementation of Vim, a popular terminal-based text editor. Neovim adds new features like asynchronous operations and powerful Lua bindings for a snappy editing experience, in addition to the improvements Vim brings to the original Vi editor.

This article walks you through configuring Neovim for Swift development, providing configurations for various plugins to build a working Swift editing experience. It is not a tutorial on how to use Neovim and assumes some familiarity with modal text editors like Neovim, Vim, or Vi. We are also assuming that you have already installed a Swift toolchain on your computer. If not, please see the Swift installation instructions.

Although the article references Ubuntu 22.04, the configuration itself works on any operating system where a recent version of Neovim and a Swift toolchain is available.

Basic setup and configuration includes:

  1. Installing Neovim.
  2. Installing lazy.nvim to manage our plugins.
  3. Configuring the SourceKit-LSP server.
  4. Setting up Language-Server-driven autocompletion with nvim-cmp.
  5. Setting up snippets with LuaSnip.

Guarantee your code is free of data races by enabling the Swift 6 language mode.

Learn how new cross-platform APIs in RealityKit can help you build immersive apps for iOS, macOS, and visionOS. Check out the new hover effects, lights and shadows, and portal crossing features, and view them in action through real examples.

Apple has released Embedded Swift, a subset of the Swift language, bringing Swift to both Arm and RISC-V microcontrollers.

If you want to go spelunking in SwiftUI’s .swiftinterface file (people have found interesting things in there in past years), note that there’s a new SwiftUICore.framework this year, so now there’s two files to check.

/Applications/Xcode-16.0b1.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/System/Library/Frameworks/SwiftUICore.framework/Modules/SwiftUICore.swiftmodule/arm64-apple-ios.swiftinterface

Arrange spatial Personas in a team-based guessing game

Use low-level mesh and texture APIs to achieve fast updates to a person’s brush strokes by integrating RealityKit with ARKit and SwiftUI.

Use attachments to place 2D content relative to 3D content in an immersive space.

Use this code to follow along with a guide to migrating your code to take advantage of the full concurrency protection that the Swift 6 language mode provides.

This sample provides two separate versions of the app:

  • The original version uses Swift concurrency features but contains a number of issues that are detected by enabling Swift complete concurrency checking and that need to be resolved before enabling the Swift 6 language mode.
  • The updated version resolves these issues and has enabled Swift 6. It also adds new features that record the location of the user when they log that they drank coffee.

Watch the session to see the process step by step, and then compare the two projects to see the differences.
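As a sketch of the kind of issue complete checking surfaces, assuming a hypothetical mutable global (not taken from the sample project):

```swift
// Under complete concurrency checking, a mutable global is flagged as
// shared mutable state:
// var coffeeCount = 0   // error: var 'coffeeCount' is not concurrency-safe

// One common fix is to confine the state to an actor.
actor CoffeeTracker {
    private var count = 0

    func logCup() {
        count += 1
    }

    var total: Int { count }
}
```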

Add scroll effects, rich color treatments, custom transitions, and advanced effects using shaders and a text renderer.

Add a deeper level of immersion to media playback in your app with RealityKit and Reality Composer Pro.

visionOS provides powerful features for building immersive media playback apps. It supports playing 3D video and Spatial Audio, which helps bring the content to life and makes the viewer feel like they’re part of the action. Starting in visionOS 2, you can take your app’s playback experience even further by creating custom environments using RealityKit and Reality Composer Pro. The Destination Video sample includes a custom environment, Studio. The Studio environment provides a large, open space that’s specifically designed to provide an optimal media viewing experience, as shown in the following image.

Create a more immersive experience by adding video reflections in a custom environment.

RealityKit and Reality Composer Pro provide the tools to build immersive media viewing environments in visionOS. The Destination Video sample uses these features to build a realistic custom environment called Studio. The environment adds to its realism and makes the video player feel grounded in the space by applying reflections of the player’s content onto the surfaces of the scene.

RealityKit and Reality Composer Pro support two types of video reflections:

  • Specular reflections provide a direct reflection of the video content, and are typically useful to apply to glossy surfaces like metals and water.
  • Diffuse reflections provide a softer falloff of video content, and are useful to apply to rougher, more organic surfaces.

This article describes how to adopt reflections in your own environment, and shows how Destination Video’s Studio environment supports these effects to create a compelling media viewing experience.

A native macOS app for App Store Connect that streamlines app updates and releases, making the process faster and easier.

Here’s how to do it on an Apple Silicon Mac:

  1. Backup using Time Machine
  2. Create a new APFS volume
  3. Shut down Mac
  4. Start up and keep holding down the power button
  5. Select “Options”
  6. Then choose to reinstall Sonoma onto the volume from step 2.
  7. Wait a while (it said 5h for me, but took <1h)
  8. When it’s installed, probably best to not log into iCloud (though I did, and then disabled all the various sync options) and skip migrating your previous user account
  9. Then open System Settings, enable beta updates, and update that install to Sequoia
  10. File feedback to Apple!

Highlights of new technologies introduced at WWDC24.

Browse a selection of documentation for new technologies and frameworks introduced at WWDC24. Many existing frameworks have added significant functionality, and you’ll find new ways to enhance your apps targeting the latest platform release.

For a comprehensive list of downloadable sample code projects, see WWDC24 Sample Code. For the latest design guidance localized in multiple languages, see Human Interface Guidelines > What’s New.

Learn how to adopt spatial photos and videos in your apps. Explore the different types of stereoscopic media and find out how to capture spatial videos in your iOS app on iPhone 15 Pro. Discover the various ways to detect and present spatial media, including the new QuickLook Preview Application API in visionOS. And take a deep dive into the metadata and stereo concepts that make a photo or video spatial.

Finally, let’s apply a similar trick to the question of whether we’re running Xcode 15 or later. For this I am also leaning on an example I found in the WebKit sources. By declaring boolean values for several Xcode version tests:

XCODE_BEFORE_15_1300 = YES
XCODE_BEFORE_15_1400 = YES
XCODE_BEFORE_15_1500 = NO

We lay the groundwork for expanding a build setting based on the XCODE_VERSION_MAJOR build setting, which is built in:

XCODE_BEFORE_15 = $(XCODE_BEFORE_15_$(XCODE_VERSION_MAJOR))
XCODE_AT_LEAST_15 = $(NOT_$(XCODE_BEFORE_15))

In this case, on my Mac running Xcode 15.1, XCODE_BEFORE_15 expands to XCODE_BEFORE_15_1500, which expands to NO. XCODE_AT_LEAST_15 uses the aforementioned NOT_ setting, expanding to NOT_NO, which expands to YES.
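For that final expansion to work, the NOT_ settings have to be declared somewhere; a sketch of the pair, assuming the WebKit-style convention the text refers to:

```
NOT_YES = NO
NOT_NO = YES
```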

Specify your project’s build settings in plain-text files, and supply different settings for debug and release builds.

A build configuration file is a plain-text file you use to specify the build settings for a specific target or your entire project. Build configuration files make it easier to manage build settings yourself, and to change build settings automatically for different architectures and platforms. With a build configuration file, you place only the settings you want to modify in a text file. You can create multiple files, each with different combinations of build settings, and you can change the settings quickly for your target or project. Xcode layers your settings on top of other project-related settings to create the final build configuration.

Build configuration files are particularly useful in the following situations:

  • You want different build settings based on the current platform, architecture, or build type.
  • You want to store build settings in a way that is easier to inspect.
  • You want to edit build settings outside of Xcode.

For more information about how build configuration files integrate with your project’s other settings values, see Configuring the build settings of a target.
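A small sketch of what such a file might contain, with a setting conditioned on SDK; the names and values are illustrative:

```
// Debug.xcconfig
SWIFT_OPTIMIZATION_LEVEL = -Onone
GCC_PREPROCESSOR_DEFINITIONS = $(inherited) DEBUG=1

// Settings can be conditioned on platform, SDK, or architecture.
ENABLE_TESTABILITY[sdk=iphonesimulator*] = YES
```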

The MagicReplace option is automatically applied to the replace symbol effect when possible, and it works specifically with related SF Symbols. This feature is particularly useful for tappable elements in our apps, such as various toggles.

Discover how to create stunning visual effects in SwiftUI. Learn to build unique scroll effects, rich color treatments, and custom transitions. We'll also explore advanced graphic effects using Metal shaders and custom text rendering.

A two-dimensional gradient defined by a 2D grid of positioned colors.

Each vertex has a position, a color, and four surrounding Bezier control points (leading, top, trailing, bottom) that define the tangents connecting the vertex with its four neighboring vertices. (Vertices on the corners or edges of the mesh have fewer than four neighbors; they ignore their extra control points.) Control points may either be specified explicitly or implicitly.

When rendering, a tessellated sequence of Bezier patches is created, and vertex colors are interpolated across each patch, either linearly or via another set of cubic curves derived from how the colors change between neighbors; the latter typically gives smoother color transitions.
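
In SwiftUI this is exposed as the `MeshGradient` view (an iOS 18-era API). A minimal sketch of a 3x3 mesh, with the grid positions computed by a small helper:

```swift
import SwiftUI

// Build a uniform grid of vertex positions in unit space (0...1).
func meshPoints(width: Int, height: Int) -> [SIMD2<Float>] {
    (0..<height).flatMap { row in
        (0..<width).map { col in
            SIMD2(Float(col) / Float(width - 1),
                  Float(row) / Float(height - 1))
        }
    }
}

// A 3x3 mesh: nine positioned vertices, one color per vertex.
// SwiftUI interpolates the colors across the tessellated Bezier patches.
struct MeshGradientExample: View {
    var body: some View {
        MeshGradient(
            width: 3,
            height: 3,
            points: meshPoints(width: 3, height: 3),
            colors: [.red, .orange, .yellow,
                     .purple, .white, .green,
                     .indigo, .blue, .mint]
        )
    }
}
```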

Discover how Swift balances abstraction and performance. Learn what elements of performance to consider and how the Swift optimizer affects them. Explore the different features of Swift and how they're implemented to further understand the tradeoffs available that can impact performance.

Dive into the basis for your app's dynamic memory: the heap! Explore how to use Instruments and Xcode to measure, analyze, and fix common heap issues. We'll also cover some techniques and best practices for diagnosing transient growth, persistent growth, and leaks in your app.

Get started with noncopyable types in Swift. Discover what copying means in Swift, when you might want to use a noncopyable type, and how value ownership lets you state your intentions clearly.

Measure CPU and GPU utilization to find ways to improve your app’s performance.

You use the RealityKit framework to add 3D content to an ARKit app. The framework runs an entity component system (ECS) on the CPU to manage tasks like physics calculations, animations, audio processing, and network synchronization. It also relies on the Metal framework and GPU hardware to perform multithreaded rendering.

Although RealityKit handles much of the complexity of this system for you, it’s still important to optimize your app for performance. Use debugging features built in to RealityKit — along with standard tools like Xcode and Instruments — to pinpoint the causes of reduced frame rate. Then make data-driven adjustments to your assets or to the way you use the framework to improve performance.

Learn how to use LLDB to explore and debug codebases. We'll show you how to make the most of crashlogs and backtraces, and how to supercharge breakpoints with actions and complex stop conditions. We'll also explore how the “p” command and the latest features in Swift 6 can enhance your debugging experience.

Meet the RealityKit debugger and discover how this new tool lets you inspect the entity hierarchy of spatial apps, debug rogue transformations, find missing entities, and detect which parts of your code are causing problems for your systems.

Explore how builds are changing in Xcode 16 with explicitly built modules. Discover how modules are used to build your code, how explicitly built modules improve transparency in compilation tasks, and how you can optimize your build by sharing modules across targets.

Learn about the capabilities of SwiftUI container views and build a mental model for how subviews are managed by their containers. Leverage new APIs to build your own custom containers, create modifiers to customize container content, and give your containers that extra polish that helps your apps stand out.

Build a multiplatform app that uses windows, volumes, and animations to create a robot botanist’s greenhouse.

BOT-anist is a game-like experience where you build a custom robot botanist by selecting from a variety of color and shape options, and then guide your robot around a futuristic greenhouse to plant alien flowers. This app demonstrates how to build an app for visionOS, macOS, iOS, and iPadOS using a single shared Xcode target and a shared Reality Composer Pro project.

This sample shows off a number of RealityKit and visionOS features, including volume ornaments, dynamic lights and shadows, animation library components, and vertex animation using blend shapes. It also demonstrates how to set a volume’s default size and enable user resizing of volumes.

Discover powerful new ways to customize volumes and immersive spaces in visionOS. Learn to fine-tune how volumes resize and respond to people moving around them. Make volumes and immersive spaces interact through the power of coordinate conversions. Find out how to make your app react when people adjust immersion with the Digital Crown, and use a surrounding effect to dynamically customize the passthrough tint in your immersive space experience.

Learn how to create great single and multi-window apps in visionOS, macOS, and iPadOS. Discover tools that let you programmatically open and close windows, adjust position and size, and even replace one window with another. We'll also explore design principles for windows that help people use your app within their workflows.

Access iCloud from macOS guest virtual machines.

In macOS 15 and later, Virtualization supports access to iCloud accounts and resources when running macOS in a virtual machine (VM) on Apple silicon. When you create a VM in macOS 15 from a macOS 15 software image (an .ipsw file) using a VZMacHardwareModel that you obtain from a VZMacOSRestoreImage, Virtualization configures an identity for the VM that it derives from security information in the host’s Secure Enclave. Just as individual physical devices have distinct identities based on their Secure Enclaves, this identity is distinct from other VMs.

If someone moves a VM to a different Mac host and restarts it, the Virtualization framework automatically creates a new identity for the VM using the information from the Secure Enclave of the new Mac host. This identity change requires the person using the VM to reauthenticate to allow iCloud to restart syncing data to the VM.

Additionally, the Virtualization framework detects attempts to start multiple copies of the same VM simultaneously on the same Mac host. For example, when someone duplicates the files that make up a VM, the framework treats the copy of the VM as a clone of the first one. Starting a second clone while another clone is already running causes the Virtualization framework to automatically construct a new identity for the second clone. This preserves the integrity that different VMs have distinct identities, and requires that the person using the VM reauthenticate to use iCloud services.

A value that can replace the default text view rendering behavior.

An object for controlling video experiences.

Use this class to control, observe, and respond to experience changes for an AVPlayerViewController. AVPlayerViewController’s presentation APIs will no longer be honored once an AVExperienceController is attached. Using those presentation APIs may preclude use of AVExperienceController.

An object to manage viewing multiple videos at once.

Clipboard manager for macOS which does one job - keep your copy history at hand. Period.

Lightweight. Open source. No fluff.

Super Simple Streaming For 75% Less

Stream 8K+ resolution videos without any encoding or packaging costs. Spend your time on creativity instead of complexity.

Watch YouTube in your theater

Welcome to Theater: the most immersive way to watch YouTube, your media files, and even spatial livestreamed events in a tastefully designed movie theater with immersive sound, multiplex scale, and yes, even your friends can join you to watch together in SharePlay.

Add information to declarations and types.

There are two kinds of attributes in Swift — those that apply to declarations and those that apply to types. An attribute provides additional information about the declaration or type. For example, the discardableResult attribute on a function declaration indicates that, although the function returns a value, the compiler shouldn’t generate a warning if the return value is unused.

You specify an attribute by writing the @ symbol followed by the attribute’s name and any arguments that the attribute accepts:

@<#attribute name#>
@<#attribute name#>(<#attribute arguments#>)

Some declaration attributes accept arguments that specify more information about the attribute and how it applies to a particular declaration. These attribute arguments are enclosed in parentheses, and their format is defined by the attribute they belong to.
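
For example, marking a hypothetical logging function with `discardableResult` lets callers ignore its return value without a compiler warning:

```swift
// @discardableResult suppresses the "result unused" warning at call sites.
// (`log` is an illustrative function, not a standard library API.)
@discardableResult
func log(_ message: String) -> Int {
    print(message)
    return message.count
}

log("starting up")       // fine: result ignored, no warning
let length = log("hi")   // the result can still be used when wanted
```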

A type whose values can safely be passed across concurrency domains by copying.

What is an effect, anyway? In this hypothetical language (which I’ll call effecta because it sounds cool), an effect should be any change to any state. This sounds generic, but a generic foundation allows us to apply a question to everything we do to make sure it respects the symmetries of our system. Our question in this case will be: “is this thing an effect?”, and the answer will be “it is an effect if and only if it is a change to some state”.

This proposal allows the compiler to model how values are accessed so that it can be much less conservative about sendability. This modeling takes into account control statements like if and switch as it tracks values. The implementation is deep and, unsurprisingly, very sophisticated. I’m not going to get into too many of the details; instead, I’m going to quote the proposal directly:

> The compiler will allow transfers of non-Sendable values between isolation domains where it can prove they are safe and will emit diagnostics when it cannot at potential concurrent access points so that programmers don’t have to reason through the data flow themselves.

Build efficient custom worlds for your app.

You can implement immersive environments for your app that people can fade in and out using the Digital Crown, just like the provided system environments. However, custom immersive environments can cause performance and thermal problems if you’re not careful about how you build them. This article describes ways to address these potential problems, and the sample provides a demonstration of some of these methods in action.

In simple words, the problem is that the linker over-optimizes the binary, removing symbols that are needed at runtime. The linker’s dead-stripping logic can’t detect dynamically referenced symbols, so it strips them. And this happens not only when referencing Objective-C symbols, but with Swift too. For example, when integrating the Composable Architecture, which uses Objective-C runtime capabilities, developers might need to add explicit references to those symbols or add the aforementioned flags to the build settings.

This post is the first of a series called “pondering about what we could do better in the programming world because I have time to waste”. In this post I would like to introduce the idea of a static effect system and how it could be beneficial to programming languages moving forward.

Git Credential Manager (GCM) is another way to store your credentials securely and connect to GitHub over HTTPS. With GCM, you don't have to manually create and store a personal access token, as GCM manages authentication on your behalf, including 2FA (two-factor authentication).

Describe your data up front and generate schemas, API specifications, client / server code, docs, and more.

If you want SwiftUI to reinitialize a state object when a view input changes, make sure that the view’s identity changes at the same time. One way to do this is to bind the view’s identity to the value that changes using the id(_:) modifier.

From the Apple docs:

> SwiftUI only initializes a state object the first time you call its initializer in a given view. This ensures that the object provides stable storage even as the view’s inputs change. However, it might result in unexpected behavior or unwanted side effects if you explicitly initialize the state object.
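
A sketch of the pattern (the `Item` and `DetailModel` names are illustrative): binding the child view’s identity to the changing value makes SwiftUI tear down and recreate the view, and with it the state object:

```swift
import SwiftUI

struct Item: Identifiable { let id: Int; let title: String }

final class DetailModel: ObservableObject {
    @Published var title: String
    init(item: Item) { title = item.title }
}

struct DetailView: View {
    @StateObject private var model: DetailModel
    init(item: Item) {
        _model = StateObject(wrappedValue: DetailModel(item: item))
    }
    var body: some View { Text(model.title) }
}

struct ContentView: View {
    let selection: Item
    var body: some View {
        // Without .id, DetailModel is created once and kept even when
        // `selection` changes; tying identity to selection.id forces a
        // fresh DetailView (and a fresh DetailModel) per item.
        DetailView(item: selection)
            .id(selection.id)
    }
}
```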

How to get the most out of Xcode Previews

I like using previews as a sort of story-book-like feature. Whenever I create a new component on my Components/CoreUI modules, I create a Previews view. It’s just a simple Form with two sections:

  1. Component: This is where the real component is displayed.
  2. Configuration: A set of LabeledContents with customization options for the component (texts, toggles, pickers, etc).

It’s pretty easy to do, and it gives a quick glance at how the component looks and feels.
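
A minimal version of that layout might look like this (assuming a hypothetical `BadgeView` component with a text field and a style toggle):

```swift
import SwiftUI

struct BadgeView: View {
    var text: String
    var isProminent: Bool
    var body: some View {
        Text(text)
            .padding(6)
            .background(isProminent ? Color.accentColor : Color.gray.opacity(0.3))
            .clipShape(Capsule())
    }
}

struct BadgePreviews: View {
    @State private var text = "New"
    @State private var isProminent = false

    var body: some View {
        Form {
            // 1. Component: the real component, rendered live.
            Section("Component") {
                BadgeView(text: text, isProminent: isProminent)
            }
            // 2. Configuration: controls that drive the component's inputs.
            Section("Configuration") {
                LabeledContent("Text") { TextField("Text", text: $text) }
                Toggle("Prominent", isOn: $isProminent)
            }
        }
    }
}

#Preview { BadgePreviews() }
```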

May

As is generally known, SwiftUI hands off some of its work to a private framework called AttributeGraph. In this article we will explore how SwiftUI uses that framework to efficiently update only those parts of an app necessary and to efficiently get the data out of your view graph it needs for rendering your app.

In one of my iOS apps, I have recently faced a problem where I had to efficiently look up locations that are geographically close to a specified point. As the naive approach, computing the distance between dozens of point pairs, seemed not so efficient to me, I did a little research and gave the Apple-provided R-tree implementation from GameKit a try.
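
The GameKit API is generic over an `AnyObject` element type. A sketch of indexing a few points and querying a bounding box (the data and the split-strategy choice here are illustrative):

```swift
import GameKit

// Index a few named points, then query a rectangular region.
let tree = GKRTree<NSString>(maxNumberOfChildren: 4)

let places: [(name: NSString, position: vector_float2)] = [
    ("cafe", [0.10, 0.20]),
    ("park", [0.15, 0.25]),
    ("airport", [0.90, 0.80]),
]

for place in places {
    // Points are stored as degenerate rectangles (min == max).
    tree.addElement(place.name,
                    boundingRectMin: place.position,
                    boundingRectMax: place.position,
                    splitStrategy: .reduceOverlap)
}

// Everything within the box [0, 0] x [0.3, 0.3]:
let nearby = tree.elements(inBoundingRectMin: [0, 0],
                           rectMax: [0.3, 0.3])
// nearby contains "cafe" and "park" but not "airport".
```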

A structure that calculates k-nearest neighbors.

An object that starts and manages headphone motion services.

This class delivers headphone motion updates to your app. Use an instance of the manager to determine if the device supports motion, and to start and stop updates. Adopt the CMHeadphoneMotionManagerDelegate protocol to receive and respond to motion updates. Before using this class, check isDeviceMotionAvailable to make sure the feature is available.
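
A sketch of starting and stopping updates (authorization prompts and error handling elided; the handler runs on the queue you pass):

```swift
import CoreMotion

final class HeadphoneTracker {
    private let manager = CMHeadphoneMotionManager()

    func start() {
        // Always check availability before starting updates.
        guard manager.isDeviceMotionAvailable else { return }

        manager.startDeviceMotionUpdates(to: .main) { motion, error in
            guard let motion else { return }
            // Pitch/roll/yaw of the user's head, in radians.
            let attitude = motion.attitude
            print(attitude.pitch, attitude.roll, attitude.yaw)
        }
    }

    func stop() {
        manager.stopDeviceMotionUpdates()
    }
}
```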

One workaround is to rely on environment variables in your Package.swift

// swift-tools-version:5.7
// The swift-tools-version declares the minimum version of Swift required to build this package.

import Foundation
import PackageDescription

let mocksEnabled = ProcessInfo.processInfo.environment["MOCKS"] != "NO"

let package = Package(
    name: "MyPackage",
    defaultLocalization: "en",
    dependencies: [
    ],
    targets: [
        // your targets
    ]
)

package.targets.forEach { target in
    guard target.type == .regular else { return }

    var settings = target.swiftSettings ?? []
    // For example, define a compilation condition when mocks are enabled
    // (the "MOCKS_ENABLED" name is illustrative):
    if mocksEnabled {
        settings.append(.define("MOCKS_ENABLED"))
    }
    target.swiftSettings = settings
}
A fun side project for a great cause featuring Core Motion, SwiftUI, a little help from AI, and a pair of AirPods to count 100 push-ups a day.

Entering a new platform only happens a few times in a developer's life. It is a rare and delicious event, when you step into the realm of something genuinely new. If you are fast, you can feel like one of the explorers of old. Everything is new and flexible; the new platform doesn't yet have established patterns, which gives you plenty of space to experiment.

Optimize text readability in visionOS leveraging font, color, and vibrancy

visionOS introduces a new layer to typography, where spatial considerations play a crucial role. Unlike traditional displays, text needs to be legible from varying distances and contexts. Font size and weight become the main factors in establishing a clear typographic hierarchy that holds up under those conditions.

I’ve found what I believe to be a bug, or at least deeply disappointing behavior, in Xcode’s treatment of SwiftUI previews. I’ll put an explanation together in the paragraphs that follow, but the TL;DR is: I think you’ll probably want to start wrapping all your SwiftUI Previews and Preview Content Swift source code in #if DEBUG active compilation condition checks.
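
In practice that means wrapping preview declarations like so, so the preview code is compiled out of release builds:

```swift
import SwiftUI

struct ContentView: View {
    var body: some View { Text("Hello") }
}

#if DEBUG
struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
#endif
```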

When developing a public API, we often reach the point where we would like different clients of our interface to consume either experimental features under development, or to tailor specific methods for them that we would not like other clients to use.

Swift's @_spi (System Programming Interface) attribute offers a solution by allowing developers to define subsets of an API targeted at specific clients, effectively hiding them from unintended users.
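
A minimal sketch (the group name `Experimental` and the library name are illustrative): the symbol is public in the binary, but invisible unless the client opts in with a matching `@_spi` import:

```swift
// In the library target (MyLibrary):
@_spi(Experimental)
public func experimentalSearch(_ query: String) -> [String] {
    // ... not yet ready for general consumption
    []
}

// In a privileged client:
@_spi(Experimental) import MyLibrary

let results = experimentalSearch("swift")

// In an ordinary client, a plain `import MyLibrary` does not expose
// experimentalSearch; referencing it fails to compile.
```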

The front-end to your dev env

Pronounced "MEEZ ahn plahs"

An interactive study of common retry methods

Provide the localizable files from your project to localizers.

Export localizations for the languages and regions you’re ready to support. You can export all the files that you need to localize from your Xcode project, or export the files for specific localizations. Optionally, add files to the exported folders to provide context, and then give the files to localizers.

Setting name: SWIFT_EMIT_LOC_STRINGS

When enabled, the Swift compiler will be used to extract Swift string literal and interpolation LocalizedStringKey and LocalizationKey types during localization export.

As I've previously blogged in Pure Rust Implementation of Apple Code Signing (2021-04-14) and Expanding Apple Ecosystem Access with Open Source, Multi Platform Code signing (2022-04-25), I've been hacking on an open source implementation of Apple code signing and notarization using the Rust programming language. This takes the form of the apple-codesign crate / library and its rcodesign CLI executable. (Documentation / GitHub project / crates.io ).

The git rerere functionality is a bit of a hidden feature. The name stands for “reuse recorded resolution” and, as the name implies, it allows you to ask Git to remember how you’ve resolved a hunk conflict so that the next time it sees the same conflict, Git can resolve it for you automatically.
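
Turning it on is a one-liner. A quick demonstration in a throwaway repository (assumes git is installed):

```shell
# Work in a throwaway repo so global config is untouched.
repo=$(mktemp -d)
cd "$repo"
git init --quiet .

# Enable rerere for this repository (use --global to enable it everywhere).
git config rerere.enabled true
git config rerere.enabled    # prints: true

# After you resolve a conflicted merge, Git records the resolution in
# .git/rr-cache; `git rerere diff` shows what it has recorded.
```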

Collaboratively editing strings of text is a common desire in peer-to-peer applications. For example, a note-taking app might represent each document as a single collaboratively-edited string of text.

The algorithm presented here is one way to do this. It comes from a family of algorithms called CRDTs, which I will not describe here. It's similar to the approaches taken by popular collaborative text editing libraries such as Yjs and Automerge. Other articles have already been written about these similar approaches (see the references section below), but this article also has a nice interactive visualization of what goes on under the hood.

A frustrating aspect of the new MacBook Pro models is the notch. The notch itself isn't the problem; rather, it's that Apple hasn't automatically adjusted the menu bar icons so they don't hide behind the notch when many apps are running.

My colleagues often suggest purchasing Bartender for about 20€ to solve this issue. While it offers many features, I've refused to pay for a third-party solution to Apple's poor design decision. I have nothing against Bartender, but I just don’t want to install yet another app on my machine to solve such a simple problem.

Recently, I discovered a free, built-in macOS workaround that doesn't require installing Bartender or any other additional apps.

Change the whitespace settings value:

defaults -currentHost write -globalDomain NSStatusItemSelectionPadding -int 6
defaults -currentHost write -globalDomain NSStatusItemSpacing -int 6

After running these commands, you need to log out and log back in.


Expand the market for your app by supporting multiple languages and regions.

Localization is the process of translating and adapting your app into multiple languages and regions. Localize your app to provide access for users who speak a variety of languages, and who download from different App Store territories.

First, internationalize your code with APIs that automatically format and translate strings correctly for the language and region. Then add support for content that includes plural nouns and verbs by following language plural rules to increase the accuracy of your translations.

Use a string catalog to translate text, handle plurals, and vary the text your app displays on specific devices.

Your app delivers the best experience when it runs well in a person’s locale and displays content in their native language. Supporting multiple languages is more than translating text. It includes handling plurals for nouns and units, as well as displaying the right form of text on specific devices.

Use a string catalog to localize and translate all your app’s text in a visual editor right in Xcode. A string catalog automatically tracks all the localizable strings from your code, and keeps your translations in one place.

Use string catalogs to host translations, configure pluralization messages for different regions and locales, and change how text appears on different devices.

A specialized view that creates, configures, and displays Metal objects.

The MTKView class provides a default implementation of a Metal-aware view that you can use to render graphics using Metal and display them onscreen. When asked, the view provides a MTLRenderPassDescriptor object that points at a texture for you to render new contents into. Optionally, an MTKView can create depth and stencil textures for you and any intermediate textures needed for antialiasing. The view uses a CAMetalLayer to manage the Metal drawable objects.

The view requires a MTLDevice object to manage the Metal objects it creates for you. You must set the device property and, optionally, modify the view’s drawable properties before drawing.
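
A minimal setup sketch: create the system default device, hand it to the view, and configure drawable formats before drawing (the delegate and draw loop are elided):

```swift
import MetalKit

func makeMetalView() -> MTKView? {
    // The view can't create any Metal objects without a device.
    guard let device = MTLCreateSystemDefaultDevice() else { return nil }

    let view = MTKView(frame: .zero, device: device)
    view.colorPixelFormat = .bgra8Unorm
    view.depthStencilPixelFormat = .depth32Float   // optional depth texture
    view.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
    return view
}
```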

Hey, we’re the makers of Clerk and Nextjournal! 👋

We’re building application.garden, a platform for hosting small web applications written in Clojure.

You can read more about what it will be able to do in the application.garden docs. A lot of this is still in flux though, so check back regularly and be surprised! ✨

This article documents several techniques I have found effective at improving the run time performance of Swift applications without resorting to “writing C in .swift files”. (That is, without resorting to C-like idioms and design patterns.) It also highlights a few pitfalls that often afflict Swift programmers trying to optimize Swift code.

These tips are relevant as of version 5.5 of the Swift compiler. The only reason I say this is because a few of the classical boogeymen in the Swift world, like “Objective-C bridging” and “reference counting overhead” are no longer as important as they once were.

For an introduction and motivation into Embedded Swift, please see "A Vision for Embedded Swift", a Swift Evolution document highlighting the main goals and approaches.

The following document explains how to use Embedded Swift's support in the Swift compiler and toolchain.

SE-0421: Generalize effect polymorphism for AsyncSequence and AsyncIteratorProtocol

Have you ever wanted to use some AsyncSequence? I certainly have. The inability to hide the implementation type of an AsyncSequence is an enormous pain. It is particularly problematic when trying to replace Combine with AsyncAlgorithms. There are some libraries out there that help, but I’d really like this problem to just disappear.
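
With SE-0421, AsyncSequence gains primary associated types (Element and Failure), so an implementation type can finally hide behind an opaque return. A sketch (Swift 6 toolchain assumed):

```swift
// Before SE-0421 this signature was impossible to write;
// now `some AsyncSequence<Int, Never>` hides the AsyncStream.
func countdown(from start: Int) -> some AsyncSequence<Int, Never> {
    AsyncStream<Int> { continuation in
        for value in stride(from: start, through: 1, by: -1) {
            continuation.yield(value)
        }
        continuation.finish()
    }
}

// Consumption looks the same as with any async sequence:
// for await value in countdown(from: 3) { print(value) }  // 3 2 1
```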

a (very opinionated) tiny companion for your personal project

This article is a partial-rebuttal/partial-confirmation to KGOnTech’s Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, prompted by RoadToVR’s Quest 3 Has Higher Effective Resolution, So Why Does Everyone Think Vision Pro Looks Best? which cites KGOnTech. I suppose it’s a bit late, but it’s taken me a while to really get a good intuition for how visionOS renders frames, because there is a metric shitton of nuance and it’s unfortunately very, very easy to make mistakes when trying to quantify things.

This post is divided into two parts: Variable Rasterization Rate (VRR) and how visionOS renders frames (including hard numbers for internal render resolutions and such), and a testbench demonstrating why photographing the visual clarity of Vision Pro (and probably future eye tracked headsets) may be more difficult than a DSLR pointed into the lenses (and how to detect the pitfalls if you try!)

Wasmphobia analyzes a WebAssembly file and gives you a breakdown of what contributed to the module’s size. This is only really useful when the WebAssembly binary has DWARF debugging data embedded.

SE-0420: Inheritance of actor isolation

Swift’s concurrency system seems incredibly simple at first. But, eventually, we all discover that there’s actually a tremendous amount of learning required to use concurrency successfully. And, one of the most challenging things is there’s also quite a bit to unlearn too. Swift concurrency has many features that feel familiar, but actually work very differently.

In this livestream we discuss all things app architecture! This includes the risks of bringing in 3rd party libraries, how TCA compares to other styles of building apps, the future of TCA, dependency management, and a whole bunch more.

Your first step toward developing for Apple platforms.

Pathways are simple and easy-to-navigate collections of the videos, documentation, and resources you’ll need to start building great apps and games. They’re the perfect place to begin your Apple developer journey — all you need is a Mac and an idea.

The SyncUps application is a recreation of one of Apple’s more interesting demo applications, Scrumdinger. We recreate it from scratch using the Composable Architecture, with a focus on domain modeling, controlling dependencies, and testability.

Matter Casting consists of three parts:

  • The mobile app: For most content providers, this would be your consumer-facing mobile app. By making your mobile app a Matter "Casting Client", you enable the user to discover casting targets, cast content, and control casting sessions. The example Matter tv-casting-app for Android / iOS and Linux builds on top of the Matter SDK to demonstrate how a TV Casting mobile app works.
  • The TV content app: For most content providers, this would be your consumer-facing app on a Smart TV. By enhancing your TV app to act as a Matter "Content app", you enable Matter Casting Clients to cast content. The example Matter content-app for Android builds on top of the Matter SDK to demonstrate how a TV Content app works.
  • The TV platform app: The TV platform app implements the Casting Video Player device type and provides common capabilities around media playback on the TV such as play/pause, keypad navigation, input and output control, content search, and an implementation of an app platform as described in the media chapter of the device library specification. This is generally implemented by the TV manufacturer. The example Matter tv-app for Android builds on top of the Matter SDK to demonstrate how a TV platform app works.

This document describes how to enable your Android and iOS apps to act as a Matter "Casting Client". This documentation is also designed to work with the example Matter tv-casting-app samples so you can see the experience end to end.

Domain modeling plays a significant role in modern software design, and investing time and effort to mastering this skill will be worth your while. Learn to leverage Swift's expressive type system to create accurate and robust models tailored to solve problems in your domain.

Positions this view within an invisible frame with a size relative to the nearest container.

Use this modifier to specify a size for a view’s width, height, or both that is dependent on the size of the nearest container. Different things can represent a “container” including:

  • The window presenting a view on iPadOS or macOS, or the screen of a device on iOS.
  • A column of a NavigationSplitView
  • A NavigationStack
  • A tab of a TabView
  • A scrollable view like ScrollView or List

The size provided to this modifier is the size of a container like the ones listed above subtracting any safe area insets that might be applied to that container.
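
For example, in a horizontal ScrollView each item can claim a third of the container's width (a sketch; iOS 17+ API):

```swift
import SwiftUI

struct ThirdsRow: View {
    var body: some View {
        ScrollView(.horizontal) {
            LazyHStack(spacing: 8) {
                ForEach(0..<9) { index in
                    RoundedRectangle(cornerRadius: 12)
                        .fill(.tint)
                        // Each item's width = (container width minus the
                        // inter-item spacing) divided into 3 columns.
                        .containerRelativeFrame(.horizontal,
                                                count: 3,
                                                spacing: 8)
                }
            }
        }
    }
}
```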

So you have a Swift Package Manager project, without an xcodeproj, and you launch Instruments, and try to profile something (maybe Allocations), and you receive the message “Required kernel recording resources are in use by another document.” But of course you don’t have any other documents open in Instruments and you’re at a loss, so you’ve come here. Welcome.

This package allows you to use various hidden SwiftUI features. Compatible with macOS 12.0+, iOS 15.0+

SE-0418: Inferring Sendable for methods and key path literals

This is a dense proposal, covering a lot of tricky stuff around the relationships between functions, key paths, and sendability. I’m going to go out on a limb here and say that the changes here won’t affect the majority of Swift users. However, the changes are still welcome!

Conveniently generate your app PrivacyInfo.xcprivacy file.

Starting on May 1st 2024, Apple requires all apps that make use of certain APIs to declare this usage in a privacy manifest file. Since editing the file by hand is somewhat tedious, this site will help you generate the file instead: you just select which items you need to include and we do the rest!

Many yearn for the “good old days” of the web. We could have those good old days back — or something even better — and if anything, it would be easier now than it ever was.

System Log Analyzer

April

SwiftUI uses Dynamic Type to scale fonts based on the user's preferred text size (the size can be changed in the Settings app). At the moment of writing, Dynamic Type is not yet supported on macOS. When writing SwiftUI code, we can use the .font modifier to automatically set a dynamic type style, such as body, largeTitle or any of the other builtin styles. The system then chooses the appropriate font based on the user's settings.
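
A minimal sketch of picking built-in text styles; both labels scale automatically with the user's preferred text size:

```swift
import SwiftUI

struct ScalingText: View {
    var body: some View {
        VStack {
            Text("Body text").font(.body)           // scales with Dynamic Type
            Text("A Large Title").font(.largeTitle) // scales too, from a larger base
        }
    }
}
```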

While everyone who writes Swift code will use Swift Macros, not everyone should write their own. This book will help you determine whether writing Swift Macros is for you and show you the best ways to make your own.

You'll create both freestanding and attached macros and get a feel for when you should and shouldn't create them, which sort of macro you should create, and how to use SwiftSyntax to implement them. Your macros will accept parameters when appropriate and will always include tests. You'll even learn to create helpful diagnostics for your macros and even FixIts.

Following Structured Concurrency was one of the best decisions Swift could have made when introducing concurrency into the language. The impact of that decision on all the code written with concurrency in mind can't be overstated.

But the other day I needed a tool that, while allowing me to stay in a structured concurrency system, could internally leverage unstructured techniques. The exact situation is not really relevant beyond understanding that I have a system that needs to read a value from some storage, with the quirk that the value may not be there yet and thus it should wait for it.

I want to follow the structured concurrency principles on the reading side. But we can’t implement this without escaping the confines of this structure. That's because reading the value is not what starts the work of generating it. Instead, it’s another independent subsystem, at another time, that will end up saving the value into the storage.

To accomplish this, we need a way to pause the execution and resume it when another system tells us the value is available.
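
One way to sketch this is an actor-backed storage that parks readers in checked continuations until a writer supplies the value (names are illustrative; strict-concurrency warnings may apply depending on toolchain):

```swift
// Readers suspend in withCheckedContinuation; a later, independent
// write resumes all of them with the stored value.
actor ValueStorage<Value: Sendable> {
    private var value: Value?
    private var waiters: [CheckedContinuation<Value, Never>] = []

    func read() async -> Value {
        if let value { return value }
        return await withCheckedContinuation { waiters.append($0) }
    }

    func write(_ newValue: Value) {
        value = newValue
        for waiter in waiters { waiter.resume(returning: newValue) }
        waiters.removeAll()
    }
}
```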

In addition to using Apple’s convenient, safe, and secure in-app purchase system, apps on the App Store in the United States that offer in-app purchases can also use the StoreKit External Purchase Link Entitlement (US) to include a link to the developer’s website that informs users of other ways to purchase digital goods or services. To use the entitlement, you’ll need to submit a request, enable the entitlement in Xcode, and use required StoreKit APIs. Apple will review your app to ensure it complies with the terms and conditions of the entitlement, as well as the App Review Guidelines and the Apple Developer Program License Agreement.

Now it’s Ruby that’s 5 times faster than Crystal!!! And 20x faster than our original version. Most likely that’s some cost from the FFI or something similar, though it does seem like a surprising amount of overhead.

I thought it was notable that with some minor tweaks to the Ruby code, it can now outperform a precompiled, statically typed language in a purpose-built example of when it is slow. I’m hopeful that someday, with future advancements in the Ruby JIT, even these small tweaks might not be necessary.

This is a parody of the nLab, a wiki for collaborative work on category theory and higher category theory. As anyone who's visited is probably aware, the jargon can be absolutely impenetrable for the uninitiated — thus, the idea for this project was born!

Once you generate a page, you can link to it using the hash url displayed; loading the site with no hash or following any link in the body will get you a new random page!

Configure the session when a SharePlay activity starts, and handle events that occur during the lifetime of the activity.

When one person in a group starts an activity, other people’s devices display system UI to prompt them to join that activity. When each person joins, the system prepares a GroupSession object for the activity and delivers it to their app. Your app uses that session object to:

  • Prepare any required UI.
  • Start the activity, monitor its state, and respond to changes.
  • Synchronize activity-related information.

For information about how to define activities, see Defining your app’s SharePlay activities. For information about how to start activities, see Presenting SharePlay activities from your app’s UI.

Because of habits ingrained in me, by default I tend to reach for synchronous, blocking APIs when reading and writing data to and from disk. This causes problems with Swift’s cooperatively scheduled Tasks. In this post, I examine the various async-safe approaches I’ve discovered to hitting disk, and end with a general approach that I ended up using.
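One async-safe pattern (an assumption on my part, not necessarily the approach the post lands on) is to push blocking IO onto a dedicated dispatch queue so it never occupies a thread in the cooperative pool:

```swift
import Dispatch
import Foundation

// A dedicated queue for blocking file IO, kept off the cooperative pool.
let diskQueue = DispatchQueue(label: "disk-io")

// Bridge the blocking read into async/await without blocking a pooled thread.
func readFile(at url: URL) async throws -> Data {
    try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Data, Error>) in
        diskQueue.async {
            continuation.resume(with: Result { try Data(contentsOf: url) })
        }
    }
}
```

The Task that awaits `readFile` suspends cheaply; only the dedicated queue's thread actually blocks on the filesystem.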

  • Functional programming emphasizes the use of mathematical functions and immutable data to construct software systems. This approach brings forth plenty of benefits, ranging from improved scalability and enhanced readability to streamlined debugging processes. In recent years, functional programming languages and frameworks have witnessed a surge in popularity, driven by their proven efficiency in real-world scenarios.

    1. Concurrency
    2. Enhanced readability
    3. Improved scalability
    4. Easier debugging
    5. Efficient parallel programming
    6. Testability
    7. Modularity
    8. Easier to reason about
    9. Transparency
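As a toy illustration of the first points above (my example, not the article's): a pure function over immutable data is trivially testable and safe to run in parallel.

```swift
// Pure function over immutable input: same input, same output, no side
// effects, so it is safe to call from any number of threads at once.
func totalPrice(of items: [Double], taxRate: Double) -> Double {
    items.reduce(0, +) * (1 + taxRate)
}
```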

A home for makers, musicians, artists and DIY content creators

Before and after every await, and at the beginning of every Task, add a Task.sleep (I use 100ms)
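Sketched concretely (with a stand-in for the real async work, since the tip doesn't name one), the technique looks like this:

```swift
func fetchData() async throws -> String { "profile" } // stand-in for real work

// Debug-only sleeps before and after the real await surface re-entrancy
// and ordering bugs that otherwise only appear under real-world latency.
func loadProfile() async throws -> String {
    try await Task.sleep(nanoseconds: 100_000_000) // 100ms before the await
    let data = try await fetchData()
    try await Task.sleep(nanoseconds: 100_000_000) // 100ms after the await
    return data
}
```

If your code only works without the sleeps, you were depending on timing you don't actually control.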

MobileCode (previously medc) is an editor for C. It was written for 📱phones and adapted to 🖥desktop.

It features:

  • individual line wrapping, prettified
  • hierarchical collapsing based on {} and empty lines
    • 📱swipe control
  • code generation via shell script comments

etc: multicursor, regex search, regex replace, undo, select, line select, cut/copy/paste

Swift 5.9 (WWDC23) introduced Macros to make your codebase more expressive and easier to read. In this article, I'll go over why swift macros exist, how they work and how you can easily set up one of your own.

This guide includes:

  • An overview of the Instruction Set Architecture (ISA) along with a method for detecting the existence of ISA features
  • Detailed description of the Advanced SIMD and Floating Point (FP) instructions
  • A discussion of intrinsic functions for utilizing specific instructions in high-level languages
  • An overview of CPU and cache topologies with recommendations for effective utilization of asymmetric multiprocessing
  • A high-level overview of CPU microarchitecture with sizes of key CPU structures and instruction latency and bandwidth tables
  • A discussion of recommended instruction sequences for various actions and algorithms
  • Lists of performance-monitoring events and metrics to measure key CPU performance behavior

SE-411: Isolated default value expressions

In my first post in this series, I said that Swift 5.10 can correctly find all possible sources of data races. But, I kind of lied! It turns out there is actually a pretty significant isolation hole in that version. But it gets a little more complicated, which I’ll get to.


If you’ve read my first post about Spatial Video, the second about Encoding Spatial Video, or if you’ve used my command-line tool, you may recall a mention of Apple’s mysterious “fisheye” projection format. Mysterious because they’ve documented a CMProjectionType.fisheye enumeration with no elaboration, they stream their immersive Apple TV+ videos in this format, yet they’ve provided no method to produce or playback third-party content using this projection type.

Additionally, the format is undocumented, they haven’t responded to an open question on the Apple Discussion Forums asking for more detail, and they didn’t cover it in their WWDC23 sessions. As someone who has experience in this area – and a relentless curiosity – I’ve spent time digging into Apple’s fisheye projection format, and this post shares what I’ve learned.

The de facto app for controlling monitors

Swift 5 updates have been slowly building up to the release of Swift 6. Some of the major updates have been the addition of async/await (concurrency) and existentials. If you use any of these features there will be some significant changes that will require some refactoring. Continue reading to learn how to prepare your projects and packages before the release of Swift 6 so you can also take advantage of new features (such as Swift 5.10's full data isolation) and have a smooth easy transition without any disruptive refactoring.

  • Good timing. I am waiting on some backend changes to finish for a new set of features (coming soon). Also, WWDC is going to be here before we know it, and that may influence my plans. It's usually a good time in the months leading up to WWDC to address any technical debt in order to prepare for potential new APIs.
  • Concerns about iCloud. While I have no plans for Foodnoms to stop using iCloud, in the past year I've had more concerns about the app's reliance on it. Too much of the Foodnoms codebase directly depends on CloudKit. This has started to feel more like a liability, in the case one day I wish to use another backend syncing service.
  • Sharing code with another app. For the past four months or so, I've been working on another app. This app was able to share a lot of code with Foodnoms. This was done via a shared Swift package: the monolith, "CoreFoodNoms". While I was able to share a lot of code successfully, there were some global side-effects and assumptions that were tricky to work around. (Note: I have decided to pause work on this app for the time being.)
  • Troubles with SwiftUI previews and compile times. SwiftUI previews for Foodnoms have always been unusable. This was mostly due to the incredibly slow compile times. I had heard that using SwiftUI previews in a smaller build target with fewer dependencies can help with this. However, this didn't work for me. The problem is that a lot of my SwiftUI code depends on core models, such as 'Food' and 'Recipe'. The thing is, these models were not 100% pure. Some of them referenced global singletons that required some sort of static/global initialization. As a result, SwiftUI previews of these views in smaller Swift packages would immediately crash, due to those singletons not being properly initialized.

macOS includes a variety of video and audio features that you can use in FaceTime and many other videoconferencing apps.

Reactions fill your video frame with a 3D effect expressing how you feel. To show a reaction, make the appropriate hand gesture in view of the camera and away from your face. Hold the gesture until you see the effect.

To turn this feature on or off, select Reactions in the Video menu , which appears in the menu bar when a video call is in progress. To show a reaction without using a hand gesture, click the arrow next to Reactions in the menu, then click a reaction button in the submenu.

  • With Swift 5.10, the compiler can correctly find all possible sources of data races. But, there are still quite a few sharp edges and usability issues with concurrency. Swift 6 is going to come with many language changes that will help. In fact, there are currently 13 evolution proposals and 4 pitches that are either directly or indirectly related to concurrency. That’s a lot!

    The thing is, I often find it quite challenging to read these proposals. It can be really difficult for me to go from the abstract language changes to how they will impact concrete problems I’ve faced. Honestly, sometimes I don’t even fully get the language changes! But, I’m not going to let that stop me 😬

    So, I’m going to make an attempt to cover all of the accepted evolution proposals. I’m not going to go too deep. Just a little introduction to the problem and a few examples to highlight the syntax changes. Of course, I’ll also throw in a little commentary. Each of these proposals probably deserves its own in-depth post. But, I’m getting tired just thinking about that.

Before continuing, let's take a moment to consider the cost of convenience in Xcode.

Designing a code editor that can serve the full spectrum of projects, from small to large-scale, is a challenging task. Many tools approach the problem by layering their solution and providing extensibility. The bottom-most layer is very low-level and close to the underlying build system, and the top-most layer is a high-level abstraction that's convenient to use but less flexible. By doing so, they make the simple things easy, and everything else possible.

However, Apple decided to take a different approach with Xcode. The reason is unknown, but it's likely that optimizing for the challenges of large-scale projects has never been their goal. They overinvested in convenience for small projects, provided little flexibility, and strongly coupled the tools with the underlying build system. To achieve the convenience, they provide sensible defaults, which you can easily replace, and added a lot of implicit build-time-resolved behaviors that are the culprit of many issues at scale.

Universal Links help people access your content, whether or not they have your app installed. Get the details on the latest updates for the Universal Links API, including support for Apple Watch and SwiftUI. Learn how you can reduce the size and complexity of your app-site-association file with enhanced pattern matching features like wildcards, substitution variables, and Unicode support. And discover how cached associated domains data will improve the initial launch experience for people using your app.

Enhance WASM is bringing server side rendered web components to everyone. Author your components in friendly, standards based syntax. Reuse them across multiple languages, frameworks, and servers. Upgrade them using familiar client side code when needed.

Your path to resilient, cross platform interfaces begins here.

files-to-prompt is a new tool I built to help me pipe several files at once into prompts to LLMs such as Claude and GPT-4.

You definitely want to enable the DisableOutwardActorInference upcoming feature flag!

This has come up several times on the forums, but I’ve never written it up in a standard place, so here it is: There are only three ways to get run-time polymorphism in Swift. Well, three and a half.

What do I mean by run-time polymorphism? I mean a function/method call (or variable or subscript access) that will (potentially) run different code each time the call happens. This is by contrast with many, even most other function calls: when you call Array’s append, it’s always the same method that gets called.

So, what are the three, sorry, three and a half ways to get this behavior?
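While the post saves its answer for later, the classic dynamic-dispatch mechanisms in Swift look roughly like this (my sketch, not necessarily the post's exact list):

```swift
// Class inheritance: dynamic dispatch through an overridden method.
class Animal {
    func speak() -> String { "..." }
}
class Dog: Animal {
    override func speak() -> String { "woof" }
}

// Protocol conformance: dispatch through a protocol witness.
protocol Greeter {
    func greet() -> String
}
struct English: Greeter {
    func greet() -> String { "hello" }
}

// Function values: the call site runs whatever closure was stored.
let shout: (String) -> String = { $0.uppercased() }
```

In each case the code that runs is chosen at run time by the value, not fixed at the call site the way `Array.append` is.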


Global variables allow you to access shared instances from anywhere in your codebase. With strict concurrency, we must ensure access to the global state becomes concurrency-safe by actor isolation or Sendable conformance. In exceptional cases, we can opt out by marking a global variable as nonisolated unsafe.
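The three options mentioned above might look like this (illustrative names; `nonisolated(unsafe)` requires Swift 5.10 or later):

```swift
// 1. Actor isolation: confine the mutable global to a global actor.
@MainActor var launchCount = 0

// 2. Sendable conformance: an immutable value type is safe to share.
struct AppInfo: Sendable {
    let version: String
}
let appInfo = AppInfo(version: "1.0")

// 3. Opting out: you promise the compiler you synchronize access yourself.
nonisolated(unsafe) var legacyCache: [String: String] = [:]
```

Option 3 silences the diagnostics without adding any safety, so it belongs only where an external mechanism (a lock, a serial queue) already guards the state.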

This directory contains an Xcode project that can be used for rapidly iterating on refactorings built with the SwiftRefactor library.

Create video content for visionOS by converting an existing 3D HEVC file to a multiview HEVC format.

In visionOS, 3D video uses the Multiview High Efficiency Video Encoding (MV-HEVC) format, supported by MPEG4 and QuickTime. Unlike other 3D media, MV-HEVC stores a single track containing multiple layers for the video, where the track and layers share a frame size. This track frame size is different from other 3D video types, such as side-by-side video. Side-by-side videos use a single track, and place the left and right eye images next to each other as part of a single video frame.

To convert side-by-side video to MV-HEVC, you load the source video, extract each frame, and then split the frame horizontally. Then copy the left and right sides of the split frame into the left eye and right eye layers, writing a frame containing both layers to the output.
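The horizontal split itself is simple geometry (a sketch only; the real conversion drives AVFoundation to write the two MV-HEVC layers):

```swift
import Foundation

// Compute the left- and right-eye crop rectangles for one side-by-side frame.
func eyeRects(for frame: CGRect) -> (left: CGRect, right: CGRect) {
    let halfWidth = frame.width / 2
    let left = CGRect(x: frame.minX, y: frame.minY,
                      width: halfWidth, height: frame.height)
    let right = CGRect(x: frame.minX + halfWidth, y: frame.minY,
                       width: halfWidth, height: frame.height)
    return (left, right)
}
```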

This sample app demonstrates the process for converting side-by-side video files to MV-HEVC, encoding the output as a QuickTime file. The output is placed in the same directory as the input file, with _MVHEVC appended to the original filename.

You can verify this sample’s MV-HEVC output by opening it with the sample project from Reading multiview 3D video files.

For the full details of the MV-HEVC format, see Apple HEVC Stereo Video — Interoperability Profile (PDF) and ISO Base Media File Format and Apple HEVC Stereo Video (PDF).

When you take off Apple Vision Pro (without disconnecting the battery or shutting it down), it turns off the displays to save power, locks for security, and goes to sleep. You can quickly wake and unlock Apple Vision Pro when you want to use it again.

If you disconnect the battery or shut down Apple Vision Pro, you’ll need to turn it on again before you can use it. See Complete setup.

Provide suggestions to people searching for content in your app.

You can suggest query text during a search operation by providing a collection of search suggestion views. Because suggestion views are not limited to plain text, you must also provide the search string that each suggestion view represents. You can also provide suggestions for tokens, if your search interface includes them. SwiftUI presents the suggestions in a list below the search field.

For both text and tokens, you manage the list of suggestions, so you have complete flexibility to decide what to suggest. For example, you can:

  • Offer a static list of suggestions.
  • Remember previous searches and offer the most recent or most common ones.
  • Update the list of suggestions in real time based on the current search text.
  • Employ some combination of these and other strategies, possibly changing over time.
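The "update in real time" strategy can be sketched as plain filtering logic (illustrative names) that you would then surface through SwiftUI's `searchable(text:)` and `searchSuggestions` modifiers:

```swift
import Foundation

// Suggest recent searches when the query is empty, live matches otherwise.
func suggestions(for query: String, from recents: [String]) -> [String] {
    guard !query.isEmpty else { return Array(recents.prefix(3)) }
    return recents.filter { $0.localizedCaseInsensitiveContains(query) }
}
```

Keeping the suggestion logic as a pure function like this also makes it easy to test independently of the view layer.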

Apple asks customers to help improve iOS by occasionally providing analytics, diagnostic, and usage information. Apple collects this information anonymously.

Gather crash reports and device logs from the App Store, TestFlight, and directly from devices.

After your app is distributed to customers, learn ways to improve it by collecting crash reports and diagnostic logs. If a customer reports an issue with your app, use the Crashes organizer in Xcode to get a report about the issue, as described in How are reports created? If the Crashes organizer doesn’t contain the diagnostic information you need or is unavailable to you, the customer can collect logs from their device and share them directly with you to resolve the issue. Once you have a crash report, you may need to add identifiable symbol information to the crash report—see Adding identifiable symbol names to a crash report for more information. For issues that aren’t crashes, inspect the operating system’s console log to find important information for diagnosing the issue’s source.

Crossing the language boundary between Haskell and Swift. This is the second part of an in-depth guide into developing native applications using Haskell with Swift.

This is the second installment of the in-depth series of blog posts on developing native macOS and iOS applications using both Haskell and Swift/SwiftUI. This post covers how to call (non-trivial) Haskell functions from Swift by using a foreign function calling-convention strategy similar to that described in Calling Purgatory from Heaven: Binding to Rust in Haskell, which requires argument and result marshaling.

You may find the other blog posts in this series interesting:

  1. Creating a macOS app with Haskell and Swift

The series of blog posts is further accompanied by a GitHub repository where each commit matches a step of this tutorial. If in doubt regarding any step, check the matching commit to make it clearer.

This write-up has been cross-posted to Well-Typed’s Blog.

Spatial is a free macOS command-line tool to process MV-HEVC video files (currently produced by iPhone 15 Pro and Apple Vision Pro). It exports from MV-HEVC files to common stereoscopic formats (like over/under, side-by-side, and separate left- and right-eye videos) that can be used with standard stereo/3D players and video editors. It can also make MV-HEVC video from the same stereoscopic formats to be played on Apple Vision Pro and Meta Quest.

For a deeper dive into Apple’s spatial and immersive formats, read my post about Spatial Video.

March

I started working with language models five years ago when I led the team that created CodeSearchNet, a precursor to GitHub CoPilot. Since then, I’ve seen many successful and unsuccessful approaches to building LLM products. I’ve found that unsuccessful products almost always share a common root cause: a failure to create robust evaluation systems.

I’m currently an independent consultant who helps companies build domain-specific AI products. I hope companies can save thousands of dollars in consulting fees by reading this post carefully. As much as I love making money, I hate seeing folks make the same mistake repeatedly.

This post outlines my thoughts on building evaluation systems for LLMs-powered AI products.

Iterating Quickly == Success

Like software engineering, success with AI hinges on how fast you can iterate. You must have processes and tools for:

  1. Evaluating quality (ex: tests).
  2. Debugging issues (ex: logging & inspecting data).
  3. Changing the behavior of the system (prompt engineering, fine-tuning, writing code)

Many people focus exclusively on #3 above, which prevents them from improving their LLM products beyond a demo. Doing all three activities well creates a virtuous cycle differentiating great from mediocre AI products (see the diagram below for a visualization of this cycle).

If you streamline your evaluation process, all other activities become easy. This is very similar to how tests in software engineering pay massive dividends in the long term despite requiring up-front investment.

To ground this post in a real-world situation, I’ll walk through a case study in which we built a system for rapid improvement. I’ll primarily focus on evaluation as that is the most critical component.

A high-level introduction to distributed actor systems.

Distributed actors extend Swift’s “local only” concept of actor types to the world of distributed systems.

In order to build distributed systems successfully you will need to get into the right mindset.

Distributed actors make calling methods on potentially remote actors (i.e. sending messages to them) simple and safe, thanks to compile-time guarantees about the serializability of arguments to be delivered to the remote peer. Even so, it is important to stay in the mindset of “what should happen if this actor were indeed remote…?”

Distribution comes with the added complexity of partial failure of systems. Messages may be dropped as networks face issues, or a remote call may be delivered (and processed!) successfully, while only the reply to it may not have been able to be delivered back to the caller of a distributed function. In most, if not all, such situations the distributed actor cluster will signal problems by throwing transport errors from the remote function invocation.

In this section we will try to guide you toward “thinking in actors,” but perhaps it’s best to first realize that you probably already know actors! Any time you implement some form of identity that is given tasks to work on, most likely using some concurrent queue or other synchronization mechanism, you are probably inventing some form of actor-like structure yourself!

In Swift 5.5, the Swift Package Manager adds support for package collections — bite size curated lists of packages that make it easy to discover, share and adopt packages.

At the time of this article’s publication, Swift 5.5 is available as a preview both from [Swift.org](http://swift.org) and in the Xcode 13 seeds. Swift 5.5 will be released officially later this year.

The goal of package collections is to improve two key aspects of the package ecosystem:

  1. Discovering great packages
  2. Deciding which package is the best fit for a particular engineering task

Package collections embrace and promote the concept of curation. Instead of browsing through long lists of web search results, package collections narrow the selection to a small list of packages from curators you trust. Package collections serve many use cases: For example, we envision communities of Swift developers publishing collections that reflect great packages produced and used by those communities to tackle everyday tasks. Educators can also use package collections to aggregate a set of packages to go along with course materials. Enterprises can use package collections to narrow the decision space for their internal engineering teams, focusing on a trusted set of vetted packages.

Choose a product or search below to view related documents and available downloads.

Many of Apple’s own visionOS apps, like Music, Safari, and Apple TV, have a handy search bar front and center on the window so you can easily search through your content. Oddly, as of visionOS 1.1, replicating this visually as a developer using SwiftUI or UIKit is not particularly easy due to the lack of a direct API, but it’s still totally possible, so let’s explore how.

With Swift, anyone can code like the pros. Whether you’re working on a project for school, earning an industry-recognized credential, or just looking to build your skills, Swift makes it easy to create great apps for all Apple platforms — no experience necessary.

The main Swift repository contains the source code for the Swift compiler and standard library, as well as related components such as SourceKit (for IDE integration), the Swift regression test suite, and implementation-level documentation.

The Swift driver repository contains a new implementation of the Swift compiler’s “driver”, which aims to be a more extensible, maintainable, and robust drop-in replacement for the existing compiler driver.

Compiler Architecture

As a whole, the Swift compiler is principally responsible for translating Swift source code into efficient, executable machine code. However, the Swift compiler front-end also supports a number of other tools, including IDE integration with syntax coloring, code completion, and other conveniences. This document provides a high-level description of the major components of the Swift compiler:

  • Parsing: The parser is a simple, recursive-descent parser (implemented in lib/Parse) with an integrated, hand-coded lexer. The parser is responsible for generating an Abstract Syntax Tree (AST) without any semantic or type information, and emits warnings or errors for grammatical problems with the input source.
  • Semantic analysis: Semantic analysis (implemented in lib/Sema) is responsible for taking the parsed AST and transforming it into a well-formed, fully-type-checked form of the AST, emitting warnings or errors for semantic problems in the source code. Semantic analysis includes type inference and, on success, indicates that it is safe to generate code from the resulting, type-checked AST.
  • Clang importer: The Clang importer (implemented in lib/ClangImporter) imports Clang modules and maps the C or Objective-C APIs they export into their corresponding Swift APIs. The resulting imported ASTs can be referred to by semantic analysis.
  • SIL generation: The Swift Intermediate Language (SIL) is a high-level, Swift-specific intermediate language suitable for further analysis and optimization of Swift code. The SIL generation phase (implemented in lib/SILGen) lowers the type-checked AST into so-called “raw” SIL. The design of SIL is described in docs/SIL.rst.
  • SIL guaranteed transformations: The SIL guaranteed transformations (implemented in lib/SILOptimizer/Mandatory) perform additional dataflow diagnostics that affect the correctness of a program (such as a use of uninitialized variables). The end result of these transformations is “canonical” SIL.
  • SIL optimizations: The SIL optimizations (implemented in lib/SILOptimizer/Analysis, lib/SILOptimizer/ARC, lib/SILOptimizer/LoopTransforms, and lib/SILOptimizer/Transforms) perform additional high-level, Swift-specific optimizations to the program, including (for example) Automatic Reference Counting optimizations, devirtualization, and generic specialization.
  • LLVM IR generation: IR generation (implemented in lib/IRGen) lowers SIL to LLVM IR, at which point LLVM can continue to optimize it and generate machine code.

Add conditional compilation markers around code that requires a particular family of devices or minimum operating system version to run.

When you invest time developing a new feature for an app, you want to get the maximum value out of the code you write. Creating a new project to support a new platform or operating system version adds unnecessary work, especially if most of your code stays the same. The best solution is to maintain one version of your app that runs on multiple platforms and operating system versions. To achieve this, compile code conditionally for the target platform, or use availability condition checks to run code based on operating system version.
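Both techniques can be sketched in a few lines (illustrative function names):

```swift
import Foundation

// Compile-time check: the branch is selected when the binary is built.
func platformName() -> String {
    #if os(iOS)
    return "iOS"
    #elseif os(macOS)
    return "macOS"
    #else
    return "other"
    #endif
}

// Run-time check: one binary branches on the installed OS version.
@available(iOS 16.0, macOS 13.0, *)
func newAPIFeature() {
    // Calls APIs introduced in these OS versions.
}

func useFeatureIfPossible() {
    if #available(iOS 16.0, macOS 13.0, *) {
        newAPIFeature()
    } else {
        // Fall back on earlier operating system versions.
    }
}
```

The `#if os(...)` block is resolved by the compiler, so unselected branches don't even need to compile on other platforms; `#available` ships both paths in the same binary.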

Skip’s Swift to Kotlin language transpiler is able to convert a large subset of the Swift language into Kotlin. The transpiler has the following goals:

  1. Avoid generating buggy code. We would rather give you an immediate error or generate Kotlin that fails to compile altogether than to generate Kotlin that compiles but behaves differently than your Swift source.
  2. Allow you to write natural Swift. Swift is a sprawling language; we attempt to support its most common and useful features so that you can code with confidence.
  3. Generate idiomatic Kotlin. Where possible, we strive to generate clean and idiomatic Kotlin from your Swift source.

These goals form a hierarchy. For example, if generating more idiomatic Kotlin would run the risk of introducing subtle behavioral differences from the source Swift, Skip will always opt for a less idiomatic but bug-free transpilation.

3D DOM viewer, copy-paste this into your console to visualise the DOM topographically.

XcodePilot is a powerful development tool designed to provide integrated features and tools for Apple platform developers, aiming to enhance development efficiency and simplify the development process. XcodePilot integrates multiple tools, including Copilot, Xcode and Runtime management, simulator management, cache cleaning, and keyboard shortcuts customization. We continuously introduce new features to meet the needs of developers.

Binary Vector Search: The 30x Memory Reduction Revolution with Preserved Accuracy

Within the field of vector search, an intriguing development has arisen: binary vector search. This approach shows promise in tackling the long-standing issue of memory consumption by achieving a remarkable 30x reduction. However, a critical aspect that sparks debate is its effect on accuracy.

We believe that using binary vector search, along with specific optimization techniques, can maintain similar accuracy. To provide clarity on this subject, we showcase a series of experiments that will demonstrate the effects and implications of this approach.

By utilizing adaptive retrieval techniques, binary vectors can maintain a high level of accuracy while significantly reducing memory usage by 30 times. We have presented benchmark metrics in a table to showcase the results. It is important to note that these outcomes are specific to the openai text-embedding-3-large model, which possesses this particular property.
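The core idea can be sketched in a few lines (my sketch, not from the article): quantizing each 32-bit float to a single bit is what yields the roughly 30x memory reduction, and comparison becomes a cheap Hamming distance.

```swift
// Quantize each 32-bit float to one bit via a sign test: ~32x smaller.
func binarize(_ vector: [Float]) -> [Bool] {
    vector.map { $0 > 0 }
}

// Compare binary vectors with Hamming distance instead of cosine similarity.
func hammingDistance(_ a: [Bool], _ b: [Bool]) -> Int {
    zip(a, b).reduce(0) { $0 + ($1.0 == $1.1 ? 0 : 1) }
}
```

The adaptive-retrieval step the article describes would then rescore the top binary matches with the full-precision vectors to recover accuracy.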

Learn how actors and sendable prevent race conditions in your concurrent code.

Skip brings Swift app development to Android. It is a tool that enables developers to use a single modern programming language (Swift) and first-class development environment (Xcode) to build genuinely native apps for both iOS and Android.

To use Swift concurrency successfully, you have to learn to think in terms of isolation. It is the foundational mechanism the compiler uses to reason about and prevent data races. All variables and functions have it. The thing is, isolation is really different from every other synchronization mechanism I’ve used before. Now that I have more practice, I find it often feels really natural. But getting to that point took real time! And, boy, did I make some spectacular mistakes along the way.

Developing intuition around how isolation works is essential, but it will be less work than you might think!
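As a minimal example of isolation at work (mine, not from the post): an actor's state is reachable only from inside the actor, so the compiler can rule out data races on it.

```swift
// `value` is isolated to the actor; callers must `await`, and the compiler
// rejects any synchronous outside access to the actor's state.
actor Counter {
    private var value = 0

    func increment() -> Int {
        value += 1
        return value
    }
}
```

`let n = await counter.increment()` is the only way in; there is no API surface through which two threads could touch `value` at once.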

I've found the best way to understand this feature is to play around with it. But that has been difficult until recently because not all the necessary pieces were available in a nightly toolchain, even under an experimental flag; in particular, the ability to create pointers and optionals of non-copyable types. But that changed when @lorentey landed support for these last week. At the same time, some of the other proposals that are coming out are a little more obscure than the basic generics support, and so haven't had as much discussion. These are also much easier to understand once you actually try to use them, and see the impact of not having them.

To help tie all these pieces together, I wrote up some code that uses all these proposals in order to build a basic singly-linked list type. This code is similar to the code you can find in chapter 2 of @Gankra's excellent tutorial about linked lists in Rust, which I encourage you to read to get a better feel for how they handle ownership.

ChatGPT Code Interpreter example

PL/Swift allows you to write custom SQL functions and types for the PostgreSQL database server in the Swift programming language.

Bringing Swift to the Backend of the Backend’s Backend!

Apple doesn't like to make things easy for us, do they?

They created a wonderful first-party package ecosystem in Swift Package Manager, but didn't put much work into explaining how to make the most of it.

It's easy enough to package a dynamic framework; however, you need to jump through many undocumented hoops to properly deduplicate assets and make your app lightweight.

But when you do get it working, you can achieve awesome results like shedding 58% from your app binary size. Take the time to work through the sample project, understand these clandestine techniques, and apply similar improvements to your own apps!

Develop device drivers that run in user space.

The DriverKit framework defines the fundamental behaviors for device drivers in macOS and iPadOS. The C++ classes of this framework define your driver’s basic structure, and provide support for handling events and allocating memory. This framework also supports appropriate types for examining the numbers, strings, and other types of data in your driver’s I/O registry entry. Other frameworks, such as USBDriverKit, HIDDriverKit, NetworkingDriverKit, PCIDriverKit, SerialDriverKit, and AudioDriverKit, provide the specific behaviors you need to support different types of devices.

The drivers you build with DriverKit run in user space, rather than as kernel extensions, which improves system stability and security. You create your driver as an app extension and deliver it inside your existing app.

In macOS, use the System Extensions framework to install and upgrade your driver. In iPadOS, the system automatically discovers and upgrades drivers along with their host apps.

Install and manage user space code that extends the capabilities of macOS.

Extend the capabilities of macOS by installing and managing system extensions—drivers and other low-level code—in user space rather than in the kernel. By running in user space, system extensions can’t compromise the security or stability of macOS. The system grants these extensions a high level of privilege, so they can perform the kinds of tasks previously reserved for kernel extensions (KEXTs).

You use frameworks like DriverKit, Endpoint Security, and Network Extension to write your system extension, and you package the extension in your app bundle. At runtime, use the SystemExtensions framework to install or update the extension on the user’s system. Once installed, an extension remains available for all users on the system. Users can disable the extension by deleting the app, which deletes the extension.

An extension other apps use to access files and folders managed by your app and synced with a remote storage.

If your app focuses on providing and syncing user documents from remote storage, you can implement a File Provider extension to give users access to those documents when they’re using other apps. If you just need to share local documents, see Share files locally below. The framework has two different starting points for building your File Provider extension.

NSFileProviderReplicatedExtension — The system manages the content accessed through the File Provider extension. Available in macOS 11+ and iOS 16+.

NSFileProviderExtension — The extension hosts and manages the files accessed through the File Provider extension. Available in iOS 11+.

The replicated extension takes responsibility for monitoring and managing the local copies of your documents. The file provider focuses on syncing data between the local copy and the remote storage—uploading any local changes and downloading any remote changes. For more information, see Replicated File Provider extension.

The nonreplicated extension manages a local copy of the extension’s content, including creating and managing placeholders for remote files. It also syncs the content with your remote storage. For more information, see Nonreplicated File Provider extension.

Create a DriverKit extension to support your Thunderbolt device’s custom features.

All hardware devices require special software — called drivers — to communicate with macOS. Thunderbolt devices communicate using the PCIe interface, and so they use PCIe drivers with extra support for Thunderbolt features.

If your Thunderbolt device uses popular PCIe Ethernet controllers from Intel, Broadcom, or Aquantia, or if your device communicates using industry-standard protocols such as XHCI, AHCI, NVMe, or FireWire, you don’t need to create a custom driver. Apple supplies built-in drivers that already support these chip sets and interfaces. The only time you need to create a custom driver is when your hardware supports proprietary features. In macOS 11 and later, build any custom drivers as DriverKit extensions using the PCIDriverKit framework.

Get notifications when the contents of a directory hierarchy change.

The file system events API provides a way for your application to ask for notification when the contents of a directory hierarchy are modified. For example, your application can use this to quickly detect when the user modifies a file within a project bundle using another application.

It also provides a lightweight way to determine whether the contents of a directory hierarchy have changed since your application last examined them. For example, a backup application can use this to determine what files have changed since a given time stamp or a given event ID.

Prevent data loss and app crashes by interacting with the file system in a coordinated, asynchronous manner and by avoiding unnecessary disk I/O.

A device’s file system is a shared resource available to all running processes. If multiple processes (or multiple threads in the same process) attempt to act on the same file simultaneously, data corruption or loss may occur, and your app may even crash.

To establish safe and efficient file access, avoid performing immediate file I/O on the app’s main thread. Use NSFileCoordinator to choreograph file access, opt for the I/O-free variants of file-related APIs, and implement the prefetching mechanisms of UICollectionView and UITableView to efficiently prepare file-related data for display.
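A minimal sketch of coordinated access with NSFileCoordinator (the file path here is illustrative, not from the original doc):

```swift
// Coordinate a write so other readers and writers are held off while
// the accessor closure runs.
import Foundation

let url = FileManager.default.temporaryDirectory
    .appendingPathComponent("coordinated-example.txt")

let coordinator = NSFileCoordinator()
var coordinationError: NSError?

// The closure receives a URL that is safe to write to for the duration
// of the coordinated operation.
coordinator.coordinate(writingItemAt: url, options: .forReplacing,
                       error: &coordinationError) { safeURL in
    do {
        try "hello".write(to: safeURL, atomically: true, encoding: .utf8)
    } catch {
        print("Write failed: \(error)")
    }
}
```

For real apps, prefer performing the coordinated work off the main thread, since `coordinate` blocks until access is granted.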

Add more protection to your HomeKit accessories by controlling which services and devices they communicate with on your home Wi-Fi network and over the internet.

Use universal links to link directly to content within your app and share data securely.

Overview

You can connect to content deep inside your app with universal links. Users open your app in a specified context, allowing them to accomplish their goals efficiently.

When users tap or click a universal link, the system redirects the link directly to your app without routing through Safari or your website. In addition, because universal links are standard HTTP or HTTPS links, one URL works for both your website and your app. If the user has not installed your app, the system opens the URL in Safari, allowing your website to handle it.

When users install your app, the system checks a file stored on your web server to verify that your website allows your app to open URLs on its behalf. Only you can store this file on your server, securing the association of your website and your app.

Take the following steps to support universal links:

  1. Create a two-way association between your app and your website and specify the URLs that your app handles, as described in Supporting associated domains.
  2. Update your app delegate to respond to the user activity object the system provides when a universal link routes to your app, as described in Supporting universal links in your app.
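Step 2 above might be sketched like this in a UIKit app delegate (the routing logic is omitted; treat this as a shape to adapt, not a drop-in):

```swift
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        // Universal links arrive as a browsing-web user activity with the URL attached.
        guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
              let url = userActivity.webpageURL else {
            return false
        }
        // Route to the right screen based on url.path and url.query here.
        print("Opened via universal link: \(url)")
        return true
    }
}
```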

With universal links, users open your app when they click links to your website within Safari and WKWebView, and when they click links that result in a call to:


Experimental support for generic noncopyable types in the #swift standard library is now available in the nightly toolchain.

Here's a simple demonstration of adoption of this feature on the Swift Playdate example project. Switching the Sprite type from an enum+class box to a simpler non-copyable struct drops binary size from 7k to 6k on the SwiftBreak game.
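A toy non-copyable type, to show the shape of the feature (this example is mine, not from the Playdate project; requires Swift 5.9 or later):

```swift
// A noncopyable struct: values move rather than copy, so the compiler
// enforces unique ownership at compile time.
struct UniqueToken: ~Copyable {
    let id: Int

    consuming func take() -> Int {
        // `consuming` ends the value's lifetime when called.
        return id
    }
}

let token = UniqueToken(id: 7)
let id = token.take()
// Using `token` after `take()` is a compile-time error, not a runtime crash.
print(id)
```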

While I was researching how to do level-order traversals of a binary tree in Haskell, I came across a library called tree-traversals which introduced a fancy Applicative instance called Phases. It took me a lot of effort to understand how it works. Although I still have some unresolved issues, I want to share my journey.

Note: I was planning to post this article on Reddit. But I gave up because it was too long so here might be a better place.

Note: This article is written in a beginner-friendly way. Experts may find it tedious.

Note: The author is not a native English speaker and is glad to accept corrections, refinements and suggestions.

Last week, I went on an adventure through the electromagnetic spectrum!

It’s like an invisible world that always surrounds us, and allows us to do many amazing things: It’s how radio and TV are transmitted, it’s how we communicate using Wi-Fi or our phones. And there are many more things to discover there, from all over the world.

In this post, I’ll show you fifty things you can find there — all you need is this simple USB dongle and an antenna kit!

Use mergeable dynamic libraries to get app launch times similar to static linking in release builds, without losing dynamically linked build times in debug builds.

In Xcode 15 or later, you can include symbols from a separate, mergeable dynamic library for macOS and iOS app and framework targets. Mergeable dynamic libraries include extra metadata so that Xcode can merge the library into another binary, similar to linking a static library with -all_load. When you enable automatic merging, Xcode enables build settings that make app launching fast and keep debugging and development build times fast.
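As a sketch of the knobs involved (setting names as of Xcode 15; verify them against your Xcode version's build settings editor):

```xcconfig
// On the app target: let Xcode automatically merge eligible framework dependencies.
MERGED_BINARY_TYPE = automatic

// On a framework target that should participate in merging.
MERGEABLE_LIBRARY = YES
```

In debug builds Xcode keeps the libraries dynamically linked for fast incremental builds; in release builds it merges them for faster launch.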

Make your app more responsive by examining the event-handling and rendering loop.

Human perception is adept at identifying motion and linking cause to effect through sequential actions. This is important for graphical user interfaces because they rely on making the user believe a certain interaction with a device causes a specific effect, and that the objects onscreen behave sufficiently realistically. For example, a button needs to highlight when a person taps or clicks it, and when someone drags an object across the screen, it needs to follow the mouse or finger.

There are two ways this illusion can break down:

  • The time between user input and the screen update is too long, so the app’s UI doesn’t seem like it’s responding instantaneously anymore. A noticeable delay between user input and the corresponding screen update is called a hang. For more information, see Understanding hangs in your app.
  • The motion onscreen isn’t fluid like it would be in the real world. An example is when the screen seems to get stuck and then jumps ahead during scrolling or during an animation. This is called a hitch.

This article covers different types of user interactions and how the event-handling and rendering loop processes events to handle them. This foundational knowledge helps you understand what causes hangs and hitches, how the two are similar, and what differentiates them.

Managing Dependencies in the Age of SwiftUI

Dependency Injection (or in short: DI) is one of the most fundamental parts of structuring any kind of software application. If you do DI right, it gets a lot easier to change and extend your application in a safe manner. But if you get it wrong, it can become increasingly more difficult to ship your features in a timely, correct and safe way.

Apple notoriously has been quite unopinionated about Dependency Injection in its development frameworks until recently, when it introduced EnvironmentObject for SwiftUI.

In this post, let’s see how we can use GitHub Actions to automate building the DocC of a Swift Package with GitHub Actions.

Learn how you can optimize your app with the Swift Concurrency template in Instruments. We'll discuss common performance issues and show you how to use Instruments to find and resolve these problems. Learn how you can keep your UI responsive, maximize parallel performance, and analyze Swift concurrency activity within your app. To get the most out of this session, we recommend familiarity with Swift concurrency (including tasks and actors).

View power and performance metrics for apps you distribute through the App Store.

Use the Xcode Organizer to view anonymized performance data from your app’s users, including launch times, memory usage, UI responsiveness, and impact on the battery. Use the data to tune the next version of your app and catch regressions that make it into a specific version of your app.

In Xcode, choose Window > Organizer to open the Organizer window, and then select the desired metric or report. In some cases, the pane shows “Insufficient usage data available” because there may not be enough anonymized data reported from participating user devices. When this happens, try checking back in a few days.

Determine the cause for delays in user interactions by examining the main thread and the main run loop.

A discrete user interaction occurs when a person performs a single well-contained interaction and the screen then updates. An example is when someone presses a key on the keyboard and the corresponding letter then appears onscreen. Although the software running on the device needs time to process the incoming user input event and compute the corresponding screen update, it’s usually so quick that a human can’t perceive it and the screen update seems instantaneous.

When the delay in handling a discrete user interaction becomes noticeable, that period of unresponsiveness is known as a hang. Other common terms for this behavior are freeze because the app stops updating, and spin based on the spinning wait cursor that appears in macOS when an app is unresponsive.

Although discrete interactions are less sensitive to delays than continuous interactions, it doesn’t take long for a person to perceive a gap between an action and its reaction as a pause, which breaks their immersive experience. A delay of less than 100 ms in a discrete user interaction is rarely noticeable, but even a few hundred milliseconds can make people feel that an app is unresponsive.

A hang is almost always the result of long-running work on the main thread. This article explains what causes a hang, why the main thread and the main run loop are essential to understanding hangs, and how various tools can detect hangs on Apple devices.

Create a more responsive experience with your app by minimizing time spent in startup. A user’s first experience with an app is the wait while it launches. The OS indicates the app is launching with a splash screen on iOS and an icon bouncing in Dock on macOS. The app needs to be ready to help the user with a task as soon as possible. An app that takes too long to launch may frustrate the user, and on iOS, the watchdog will terminate it if it takes too long. Typically, users launch an app many times in a day if it’s part of their regular workflow, and a long launch time causes delays in performing a task.

When the user taps an app’s icon on their Home screen, iOS prepares the app for launch before handing control over to the app process. The app then runs code to get ready to draw its UI to the screen. Even after the app’s UI is visible, the app may still be preparing content or replacing an interstitial interface (for example, a loading spinner) with the final controls. Each of these steps contributes to the total perceived launch time of the app, and you can take steps to reduce their duration.

Understand app activations

An activation happens when a user clicks on your icon or otherwise goes back to your app.

On iOS, an activation can either be a launch or a resume. A launch is when the process needs to start, and a resume is when your app already had a process alive, even if suspended. A resume is generally much faster, and the work to optimize a launch and resume differs.

On macOS, the system will not terminate your process as part of normal use. An activation may require the system to bring in memory from the compressor, swap, and re-render.

Understanding cold and warm launch

Your app activation varies significantly depending on previous actions on the device.

For example, on iOS, if you swipe back to the home screen and immediately re-enter the app, that is the fastest activation possible. It’s also likely to be a resume. When the system determines that a launch is required, it is commonly referred to as a “warm launch.”

Conversely, if a user just played a memory-intensive game, and they then re-enter your app, for example, it may be significantly slower than your average activation. On iOS, your app typically was evicted from memory to allow the foreground application more memory. Frameworks and daemons that your app depends on to launch might also require re-launching and paging in from disk. This scenario, or a launch immediately after boot, is often referred to as a “cold launch.”

Think of warm and cold launches as a spectrum. In real use, your users will experience a range of performance based on the state of the device. This spectrum is why testing in a variety of conditions is essential to predicting your real world performance.

I’m going to share some best practices when using @StateObject property wrappers, things learned the hard way, via some bugs that were difficult to diagnose and nearly impossible to notice during code review—unless one knows what to look for.

The short version is this: if you have to explicitly initialize a @StateObject, pay close attention to the fact that the property wrapper’s initialization parameter is an escaping closure called thunk, not an object called wrappedValue. Do all the wrapped object initialization and prep inside the closure, or else you’ll undermine the performance benefits that likely motivated you to use @StateObject in the first place.
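A sketch of the correct pattern (the view and model names are mine): keep the object construction inside the `wrappedValue` autoclosure so it runs only once per view identity:

```swift
import SwiftUI

final class Model: ObservableObject {
    @Published var count: Int
    init(count: Int) {
        self.count = count
    }
}

struct CounterView: View {
    @StateObject private var model: Model

    init(initialCount: Int) {
        // `wrappedValue` is an @autoclosure @escaping parameter, so
        // Model(count:) is deferred until SwiftUI first installs this
        // view's state, instead of running on every re-initialization.
        _model = StateObject(wrappedValue: Model(count: initialCount))
    }

    var body: some View {
        Text("\(model.count)")
    }
}
```

If you instead build the `Model` eagerly in `init` and then hand it to the wrapper, you pay its construction cost on every view update and can accidentally discard state.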

Ezno is an experimental compiler I have been working on and off for a while. In short, it is a JavaScript compiler featuring checking, correctness and performance for building full-stack (rendering on the client and server) websites.

Using SharePlay and CarPlay, you and your passengers can all control the music that’s playing in the car.

Passengers can join a SharePlay session in two ways: by tapping a notification on their iPhone or by scanning a QR code, either on the CarPlay Now Playing screen or on the Now Playing screen of another passenger’s iPhone.

Create a user experience that feels responsive by removing hangs and hitches from your app.

An app that responds instantly to users’ interactions gives an impression of supporting their workflow. When the app responds to gestures and taps in real time, it creates the experience of directly manipulating the objects on the screen. Apps with a noticeable delay in user interaction (a hang) or movement on screen that appears to jump (a hitch) shatter that illusion. This leaves the user wondering whether the app is working correctly. To avoid hangs and hitches, keep the following rough thresholds in mind as you develop and test your app.

< 100 ms — Synchronous main thread work in response to a discrete user interaction.

< 1 display refresh interval (8 or 17ms) — Main thread work and work to handle continuous user interaction.

Work performed on the main thread influences both the delay between an incoming user event and the corresponding screen update as well as the maximum frequency of screen updates.

If a delay in discrete user interaction becomes longer than 100 ms, it starts to become noticeable and causes a hang. Other stages of the event handling and rendering pipeline contribute to the overall delay. Assume that less than half that time is available for your app’s main thread to do its work. A shorter delay is rarely noticeable.

For fluid, uninterrupted motion, a new frame needs to be ready whenever the screen updates. On Apple devices, this can be as often as 120 times per second, or every 8.3 ms. Another common display refresh rate for Apple devices is 60Hz, so one update every 16.7ms. Depending on system conditions and other work that your app performs, you might not have the full display refresh interval to prepare your next screen update. If the work that your app needs to perform on the main thread to update the screen is less than 5 ms, the update is usually ready in time. If it takes longer, you need to take a closer look at the specific devices you’re targeting and the display refresh rate your app needs to support. Look at the section on hitches below for tools and guidelines to determine whether you are meeting the appropriate responsiveness thresholds.

Similarly, avoid scheduling work on the main thread when it doesn’t have to execute there, even asynchronously, e.g. via dispatch_async or by awaiting the result of a function call on the main actor. Because you have no control over when exactly the main thread processes your work or what the user might be doing at the time, the work might arrive in the middle of a continuous user interaction and cause a hitch.
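One way to sketch this (the helper names are hypothetical): run the heavy work in a detached task and hop to the main actor only for the final UI update.

```swift
// Keep long-running work off the main actor entirely; only the small
// UI update at the end runs on the main thread.
func expensiveComputation() -> Int {
    (0..<1_000_000).reduce(0, +)
}

@MainActor
func updateUI(with value: Int) {
    print("result: \(value)")
}

func handleTap() {
    Task.detached(priority: .utility) {
        let result = expensiveComputation()  // off the main actor
        await updateUI(with: result)         // brief hop back for the UI
    }
}
```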

```swift
import SwiftUI
import AsyncAlgorithms

struct AsyncChanges<V>: ViewModifier where V: Equatable, V: Sendable {
    typealias Element = (oldValue: V, newValue: V)
    typealias Action = (AsyncStream<Element>) async -> Void

    @State private var streamPair = AsyncStream<Element>.makeStream()
    private let action: Action
    private let value: V
    private let initial: Bool

    init(of value: V, initial: Bool, action: @escaping Action) {
        self.action = action
        self.value = value
        self.initial = initial
    }

    func body(content: Content) -> some View {
        content
            .onChange(of: value, initial: initial) { oldValue, newValue in
                streamPair.continuation.yield((oldValue, newValue))
            }
            .task {
                await action(streamPair.stream)
            }
    }
}

extension View {
    public func asyncChanges<V>(
        of value: V,
        initial: Bool = false,
        action: @escaping (AsyncStream<(oldValue: V, newValue: V)>) async -> Void
    ) -> some View where V: Equatable, V: Sendable {
        modifier(AsyncChanges<V>(of: value, initial: initial, action: action))
    }
}

struct ContentView: View {
    @State private var username = ""

    var body: some View {
        TextField("Username", text: $username)
            .asyncChanges(of: username) { sequence in
                for await value in sequence.debounce(for: .seconds(0.25)) {
                    print("debounced value: \(value.newValue)")
                }
            }
    }
}
```

An AsyncSequence that can be consumed several times, returning the current state as specified by a reduce function.

There are so many gosh darn syntax sites. How should I remember all their URLs?

| Topic | Site |
| --- | --- |
| C Function Pointers | How Do I Declare a Function Pointer in C? |
| Date Formatting | Easy Skeezy Date Formatting for Swift and Objective-C |
| Format Styles | Gosh Darn Format Style! |
| Git | Dangit, Git |
| Objective-C Block Syntax | How Do I Declare a Block in Objective-C? |
| Swift Closure Syntax | How Do I Declare a Closure in Swift? |
| Swift Multiple Trailing Closure Syntax | How Do I Write Multiple Trailing Closures in Swift? |
| Swift if case let Syntax | How Do I Write If Case Let in Swift? |
| SwiftPM | Swift Package Manager |
| SwiftUI | Gosh Darn SwiftUI |
| SwiftUI Property Wrappers | SwiftUI Property Wrappers |

A convenient interface to the contents of the file system, and the primary means of interacting with it.

A file manager object lets you examine the contents of the file system and make changes to it. The FileManager class provides convenient access to a shared file manager object that is suitable for most types of file-related manipulations. A file manager object is typically your primary mode of interaction with the file system. You use it to locate, create, copy, and move files and directories. You also use it to get information about a file or directory or change some of its attributes.

When specifying the location of files, you can use either NSURL or NSString objects. The use of the NSURL class is generally preferred for specifying file-system items because URLs can convert path information to a more efficient representation internally. You can also obtain a bookmark from an NSURL object, which is similar to an alias and offers a more sure way of locating the file or directory later.

If you are moving, copying, linking, or removing files or directories, you can use a delegate in conjunction with a file manager object to manage those operations. The delegate’s role is to affirm the operation and to decide whether to proceed when errors occur. In macOS 10.7 and later, the delegate must conform to the FileManagerDelegate protocol.

In iOS 5.0 and later and in macOS 10.7 and later, FileManager includes methods for managing items stored in iCloud. Files and directories tagged for cloud storage are synced to iCloud so that they can be made available to the user’s iOS devices and Macintosh computers. Changes to an item in one location are propagated to all other locations to ensure the items stay in sync.
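A minimal sketch of common FileManager operations (the file name here is illustrative):

```swift
// Create, inspect, and remove a file via the shared file manager.
import Foundation

let fm = FileManager.default
let fileURL = fm.temporaryDirectory.appendingPathComponent("example.txt")

try "hello".write(to: fileURL, atomically: true, encoding: .utf8)
assert(fm.fileExists(atPath: fileURL.path))

let attributes = try fm.attributesOfItem(atPath: fileURL.path)
print("size:", attributes[.size] ?? 0)

try fm.removeItem(at: fileURL)
```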


We explore denotational interpreters: denotational semantics that produce coinductive traces of a corresponding small-step operational semantics. By parameterising our denotational interpreter over the semantic domain and then varying it, we recover dynamic semantics with different evaluation strategies as well as summary-based static analyses such as type analysis, all from the same generic interpreter. Among our contributions is the first provably adequate denotational semantics for call-by-need. The generated traces lend themselves well to describe operational properties such as evaluation cardinality, and hence to static analyses abstracting these operational properties. Since static analysis and dynamic semantics share the same generic interpreter definition, soundness proofs via abstract interpretation decompose into showing small abstraction laws about the abstract domain, thus obviating complicated ad-hoc preservation-style proof frameworks.

In this series of blog posts we’ll take a deep dive into on-device training. I’ll show how to train a customizable image classifier using k-Nearest Neighbors as well as a deep neural network.

This proposal introduces first-class differentiable programming to Swift. First-class differentiable programming includes five core additions:

  • The Differentiable protocol.
  • @differentiable function types.
  • The @differentiable declaration attribute for defining differentiable functions.
  • The @derivative and @transpose attributes for defining custom derivatives.
  • Differential operators (e.g. derivative(of:)) in the standard library.

THE FARTHEST tells the captivating tales of the people and events behind one of humanity’s greatest achievements in exploration: NASA’s Voyager mission, which celebrates its 40th anniversary this August. The twin spacecraft—each with less computing power than a cell phone—used slingshot trajectories to visit Jupiter, Saturn, Uranus and Neptune. They sent back unprecedented images and data that revolutionized our understanding of the spectacular outer planets and their many peculiar moons.

Still going strong four decades after launch, each spacecraft carries an iconic golden record with greetings, music and images from Earth—a gift for any aliens that might one day find it. Voyager 1, which left our solar system and ushered humanity into the interstellar age in 2012, is the farthest-flung object humans have ever created. A billion years from now, when our sun has flamed out and burned Earth to a cinder, the Voyagers and their golden records will still be sailing on—perhaps the only remaining evidence that humanity ever existed.

The ultimate playground for hardware programming in Swift

An example spatial/immersive video player for Apple Vision Pro

With Vision Pro, Apple has created a device that can play back spatial and immersive video recorded by iPhone 15 Pro, the Vision Pro itself, or created with my spatial command line tool (and similar tools). These videos are encoded using MV-HEVC, and each contains a Video Extended Usage box that describes how to play them back. Unfortunately, even one month after release, Apple has provided no (obvious) method to play these videos in all of their supported formats.

Out of necessity, I created a very bare-bones spatial video player to test the output of my command-line tool. It has also been used to test video samples that have been sent to me by interested parties. I've played up to 12K-per-eye (11520x5760) 360º stereo content (though at a low frame rate).

In order to avoid dependency graph nightmares, where you are unable to update or use a package due to conflicting dependency versions, we suggest being as flexible in your dependency on SwiftSyntax as possible.

This means that rather than depending on SwiftSyntax by saying you are willing to accept any minor version within a particular major version, as Xcode’s macro template does by default:

```swift
.package(
    url: "https://github.com/apple/swift-syntax",
    from: "509.0.0"
)
```

…you should instead accept a range of major versions like so:

```swift
.package(
    url: "https://github.com/apple/swift-syntax",
    "508.0.0"..<"510.0.0"
)
```

This allows people to depend on your package who are still stuck on version 508 of SwiftSyntax, while also allowing those who can target 509 to use your library.

How do you enable strict concurrency checking for all targets in a Swift Package?
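One approach (a common pattern; the package and target names are placeholders) is to append the setting to every target at the bottom of Package.swift:

```swift
// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyPackage",
    targets: [
        .target(name: "MyLibrary"),
        .testTarget(name: "MyLibraryTests", dependencies: ["MyLibrary"]),
    ]
)

// Enable strict concurrency checking for all targets in one place,
// instead of repeating the setting per target.
for target in package.targets {
    var settings = target.swiftSettings ?? []
    settings.append(.enableExperimentalFeature("StrictConcurrency"))
    target.swiftSettings = settings
}
```

In Swift 6 language mode this checking is on by default, so the loop matters mainly for packages still building in the Swift 5 mode.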

@MainActor is a Swift annotation to coerce a function to always run on the main thread and to enable the compiler to verify this. How does this work? In this article, I’m going to reimplement @MainActor in a slightly simplified form for illustration purposes, mainly to show how little “magic” there is to it. The code of the real implementation in the Swift standard library is available in the Swift repository.

@MainActor relies on two Swift features, one of them unofficial: global actors and custom executors.
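A hedged sketch of the first feature: a custom global actor has the same shape @MainActor does, a singleton actor plus the @globalActor attribute (the names here are mine):

```swift
// Declaring a global actor: any declaration annotated with @MyActor
// is isolated to MyActor.shared's executor.
@globalActor
actor MyActor {
    static let shared = MyActor()
}

@MyActor
func bump(_ x: Int) -> Int {
    // This function is isolated to MyActor; calling it from outside
    // its isolation requires `await`.
    x + 1
}

let value = await bump(41)
print(value)
```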

Recently, someone asked me a question about actor isolation. The specifics aren’t important, but I really got to thinking about it because of course they were struggling. Isolation is central to how Swift concurrency works, but it’s a totally new concept.

Despite being new, it actually uses mostly familiar mechanisms. You probably do understand a lot about how isolation works, you just don’t realize it yet.

Here’s a breakdown of the concepts, in the simplest terms I could come up with.

pfl is a Python framework developed at Apple to enable researchers to run efficient simulations with privacy-preserving federated learning (FL) and disseminate the results of their research in FL. The framework is not intended to be used for third-party FL deployments but the results of the simulations can be tremendously useful in actual FL deployments. We hope that pfl will promote open research in FL and its effective dissemination. pfl provides several useful features, including the following:

  • Get started quickly trying out PFL for your use case with your existing model and data.
  • Iterate quickly with fast simulations utilizing multiple levels of distributed training (multiple processes, GPUs and machines).
  • Flexibility and expressiveness — when a researcher has a PFL idea to try, pfl has flexible APIs to express these ideas and promote their dissemination (e.g. models, algorithms, federated datasets, privacy mechanisms).
  • Fast, scalable simulations for large experiments with state-of-the-art algorithms and models.
  • Support of both PyTorch and TensorFlow. This is great for groups that use both, e.g. other large companies.
  • Unified benchmarks for datasets that have been vetted for both TensorFlow and PyTorch. Current FL benchmarks are made for one or the other.
  • Support of other models in addition to neural networks, e.g. GBDTs. Switching between types of models while keeping the remaining setup fixed is seamless.
  • Tight integration with privacy features, including common mechanisms for local and central differential privacy.

A reimplementation of the basics of MainActor. Sample code for https://oleb.net/2022/how-mainactor-works/

February

If you’ve used SwiftUI for long enough, you’ve probably noticed that the public Swift APIs it provides are really only half the story. Normally inconspicuous unless something goes exceedingly wrong, the private framework called AttributeGraph tracks almost every single aspect of your app from behind the scenes to make decisions on when things need to be updated. It would not be much of an exaggeration to suggest that this C++ library is actually what runs the show, with SwiftUI just being a thin veneer on top to draw some platform-appropriate controls and provide a stable interface to program against. True to its name, AttributeGraph provides the foundation of what a declarative UI framework needs: a graph of attributes that tracks data dependencies.

Mastering how these dependencies work is crucial to writing advanced SwiftUI code. Unfortunately, being a private implementation detail of a closed-source framework means that searching for AttributeGraph online usually only yields results from people desperate for help with their crashes. (Being deeply unpleasant to reverse-engineer definitely doesn’t help things, though some have tried.) Apple has several videos that go over the high-level design, but unsurprisingly they shy away from mentioning the existence of AttributeGraph itself. Other developers do, but only fleetingly.

This puts us in a real bind! We can Self._printChanges() all day and still not understand what is going on, especially if the problems we have relate to missing updates rather than too many of them. To be honest, figuring out what AttributeGraph is doing internally is not all that useful unless it is not working correctly. We aren’t going to be calling those private APIs anyways, at least not easily, so there’s not much point exploring them. What’s more important is understanding what SwiftUI does and how the dependencies need to be set up to support that. We can take a leaf out of the generative AI playbook and go with the approach of just making guesses as to how things are implemented. Unlike AI, we can also test our theories. We won’t know whether our speculation is right, but we can definitely check to make sure we’re not wrong!

Create a browser that renders content using an alternative browser engine. A web browser loads content and code from remote — and potentially untrusted — servers. Design your browser app to isolate access to operating system resources, the data of the person using the app, and untrusted data from the web. Code defensively to reduce the risk posed by vulnerabilities in your browser code.

If you use WKWebView to render web content in your browser app, WebKit automatically distributes its work to extensions that isolate their access to important resources and data.

Whether you use WebKit or write your own alternative browser engine, you need to request the entitlement to act as a person’s default web browser. For more information, see Preparing your app to be the default web browser.

SwiftUI has an undocumented system for interacting with collections of View types known as VariadicView. The enum _VariadicView is the entry point to this system, which includes other types like _VariadicView_MultiViewRoot and _VariadicView.Tree. The details of these were explored in a great post from MovingParts and there have been a few other helpful blogs about it.

When I first read about it, I didn’t see the applications to my code. As with most SwiftUI, it relies heavily on generics and can be difficult to see how to use it just from reading the API. Since then, I’ve made it a core part of SnapshotPreviews and learned that, despite being a private API, it is very safe to use in production — in fact, many popular apps use it extensively.

This post will explain the specific use case I found for extracting snapshots from SwiftUI previews. Hopefully a concrete example will inspire others to use this powerful SwiftUI feature!
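For context, here is the widely shared MovingParts-style pattern the post builds on: using `_VariadicView` to interleave a divider between a view's children. This is a private API, so these names are observed behavior rather than a stable contract and may change between SwiftUI versions:

```swift
import SwiftUI

// A root that receives the resolved children of `content` and can
// iterate them like a collection.
struct DividedLayout: _VariadicView_MultiViewRoot {
    @ViewBuilder
    func body(children: _VariadicView.Children) -> some View {
        let last = children.last?.id
        ForEach(children) { child in
            child
            if child.id != last {
                Divider()   // insert between, but not after, children
            }
        }
    }
}

struct Divided<Content: View>: View {
    var content: Content
    var body: some View {
        _VariadicView.Tree(DividedLayout()) { content }
    }
}
```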

  1. Choose an existing SF Symbol (book.fill)
  2. Right click + "Duplicate as Custom Symbol"
  3. In Custom Symbols, right click + "Combine Symbol with Component"
  4. Select the component you want (badge.plus)

In visionOS, content can be displayed in windows, volumes, and spaces.

Windows and spaces generally work as advertised, but volumes have several limitations you should be aware of before designing your app around them.

The following list of issues applies to visionOS 1.0 and 1.1 beta. I’ll keep it updated as new visionOS versions are released.

A utility for transforming spatial media.

As of January 2024, Apple's MV-HEVC format for stereoscopic video is very new and barely supported by anything. However, there are millions of iPhones (iPhone 15 Pro/Pro Max) that can capture spatial video already. There was no available FOSS tool capable of splitting the stereo pair, especially not in formats suited for post-production. Upon public request, the ability to create MV-HEVC files from two separate input files was also added.

Yeah, nobody remembers this, even if they’ve heard about it before. .values is just so easy to reach for. And the bug is a subtle race condition that drops messages. And you can’t easily unit test for it. And the compiler probably can’t warn you about it. And this problem exists in any situation where an AsyncSequence “pushes” values, which is basically every observation pattern, even without Combine.

And so I struggle with whether to encourage for-await. Every time you see it, you need to think pretty hard about what’s going on in this specific case. And unfortunately, that’s kind of true of AsyncSequence generally. I’m not sure what to think about this yet. Most of my bigger projects use Combine for these kinds of things currently, and it “just works” including unsubscribing automatically when the AnyCancellable is deinited (another thing that’s easy to mess up with for-await). I just don’t know yet.
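The race described above is easy to reproduce with Combine's `.values`. Subscription to the publisher only begins when the loop first awaits, so anything sent before the task gets scheduled is silently dropped (a minimal sketch):

```swift
import Combine

let subject = PassthroughSubject<Int, Never>()

Task {
    // The subscription to `subject` is created lazily, when this
    // for-await loop first suspends…
    for await value in subject.values {
        print(value)
    }
}

// …so a value sent before the task has started running is dropped,
// with no error and no compiler warning.
subject.send(1)
```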

David Corfield made a very interesting observation: the three types of logical reasoning of Peirce’s, deduction, induction, abduction, correspond to three very elementary operations in category theory: composition, extension and lifting.
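In diagrammatic terms, the correspondence can be written out like this (a sketch of the observation, with arrows f : A → B, g : B → C, h : A → C):

```latex
\begin{aligned}
\text{deduction (composition):} &\quad \text{given } f \text{ and } g,
  \text{ conclude } h = g \circ f;\\
\text{induction (extension):}   &\quad \text{given } f \text{ and } h,
  \text{ seek } g \text{ with } g \circ f = h;\\
\text{abduction (lifting):}     &\quad \text{given } g \text{ and } h,
  \text{ seek } f \text{ with } g \circ f = h.
\end{aligned}
```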

I was inspired by that discovery to finish working on a project I had long been putting off: documenting all the URLs supported by the Settings app in iOS and iPadOS.

Optics are bidirectional data accessors that capture data transformation patterns such as accessing subfields or iterating over containers. Profunctor optics are a particular choice of representation supporting modularity, meaning that we can construct accessors for complex structures by combining simpler ones. Profunctor optics have previously been studied only in an unenriched and non-mixed setting, in which both directions of access are modelled in the same category. However, functional programming languages are arguably better described by enriched categories; and we have found that some structures in the literature are actually mixed optics, with access directions modelled in different categories. Our work generalizes a classic result by Pastro and Street on Tambara theory and uses it to describe mixed V-enriched profunctor optics and to endow them with V-category structure. We provide some original families of optics and derivations, including an elementary one for traversals. Finally, we discuss a Haskell implementation.

Owl is an experiment in human-computer interaction using wearable devices to observe our lives and extract information and insights from them using AI. Presently, only audio and location are captured, but we plan to incorporate vision and other modalities as well. The objectives of the project are, broadly speaking:

  1. Develop an always-on AI system that is useful, unlocking new ways to enhance our productivity, our understanding of ourselves and the world around us, and ability to connect with others.
  2. Implement specific use cases for always-on AI (e.g., productivity and memory enhancement, knowledge capture and sharing, health, etc.)
  3. Explore human-computer interaction questions: user experience, interface design, privacy, security.

There are three major components to this project:

  1. Wearable capture devices. These include semi-custom development boards (with some assembly required) as well as off-the-shelf products like Apple Watch. We would like to develop fully custom open source hardware.
  2. AI server.
  3. Presentation clients. Applications that display information gathered by the system (e.g., transcripts, conversation summaries) and allow interaction with an online assistant. Currently, a mobile app and web app are included.

Today we are announcing the most significant cryptographic security upgrade in iMessage history with the introduction of PQ3, a groundbreaking post-quantum cryptographic protocol that advances the state of the art of end-to-end secure messaging. With compromise-resilient encryption and extensive defenses against even highly sophisticated quantum attacks, PQ3 is the first messaging protocol to reach what we call Level 3 security — providing protocol protections that surpass those in all other widely deployed messaging apps. To our knowledge, PQ3 has the strongest security properties of any at-scale messaging protocol in the world.

Create and manipulate 3D mathematical primitives.

The Spatial module is a lightweight 3D mathematical library that provides a simple API for working with 3D primitives. Much of its functionality is similar to the 2D geometry support in Core Graphics, but in three dimensions.

The Swift programming language has a lot of potential to be used for machine learning research because it combines the ease of use and high-level syntax of a language like Python with the speed of a compiled language like C++.

MLX is an array framework for machine learning research on Apple silicon. MLX is intended for research and not for production deployment of models in apps.

MLX Swift expands MLX to the Swift language, making experimentation on Apple silicon easier for ML researchers.

As part of this release we are including:

  • A comprehensive Swift API for MLX core
  • Higher level neural network and optimizers packages
  • An example of text generation with Mistral 7B
  • An example of MNIST training
  • A C API to MLX which acts as the bridge between Swift and the C++ core

We are releasing all of the above under a permissive MIT license.

This is a big step to enable ML researchers to experiment using Swift.

Useful utilities and services over DNS

dns.toys is a DNS server that takes creative liberties with the DNS protocol to offer handy utilities and services that are easily accessible via the command line.

During the sessions, the presenter shared that somewhere in the coming year (2024) Apple would start requiring privacy manifests in signed XCFrameworks. There was little concrete detail available then, and I’ve been waiting since for more information on how to comply. I expected documentation at least, and was hoping for an update in Xcode — specifically the xcodebuild command — to add an option that accepted a path to a manifest and included it appropriately. So far, nothing from Apple on that front.

In this post, I will talk about inference rules, particularly in the field of programming language theory. The first question to get out of the way is “what on earth is an inference rule?”. The answer is simple: an inference rule is just a way of writing “if … then …”. When writing an inference rule, we write the “if” stuff above a line, and the “then” stuff below the line. Really, that’s all there is to it.
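For example, the standard typing rule for function application reads "if f has type A → B and x has type A, then f x has type B":

```latex
\frac{\Gamma \vdash f : A \to B \qquad \Gamma \vdash x : A}
     {\Gamma \vdash f\,x : B}
```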

Version numbers are hard to get right. Semantic Versioning (SemVer) communicates backward compatibility via version numbers, which often leads to a false sense of security and broken promises. Calendar Versioning (CalVer) sits at the other extreme, communicating almost no useful information at all.

Going forward I plan to version the projects I work on in a way that communicates how much effort I expect a user will need to spend to adopt the new version. I’m going to refer to that scheme as Intended Effort Versioning (EffVer for short).

List Swift compiler upcoming and experimental feature flags.

Dev Mode is a new space in Figma for developers with features that help you translate designs into code, faster

“UI is a function of state” is a pretty popular saying in the front-end world. In context (pun intended), that’s typically referring to application or component state. I thought I’d pull that thread a little further and explore all the states that can affect the UI layer…

We built this website to visually explain how the SwiftUI layout system works, and we hope you find it useful. We welcome any feedback, positive or negative, so please send us an email if you have anything to share. We're planning to build out this site over the next few months, so if you want to stay updated, subscribe to our mailing list below.

Watch and record your own custom channels.

Use streaming sources to create channels right on your TV. Security cams, web cams, open internet streams, SAT>IP devices, and more.

We present a systematic embedding of algebraic data types and their (recursive) processing using pattern-matching, and illustrate on examples of sums and recursive sums of products (strict and lazy trees). The method preserves all advantages of the tagless-final style, in particular, multiple interpretations -- such as evaluating the same DSL term strictly and non-strictly, and pretty-printing it as OCaml and Lua code. In effect, we may write Lua code with pattern-matching and type safety. As another application, we investigate the efficiency of emulating left-fold via right-fold, in call-by-value, call-by-name and call-by-need.

Practical solutions to problems with Swift Concurrency

Swift Concurrency can be really hard to use. I thought it could be handy to document and share solutions and hazards you might face along the way. I am absolutely not saying this is comprehensive, or that the solutions presented are great. I'm learning too. Contributions are very welcome, especially for problems!

Hazards

Quick definitions for the hazards referenced throughout the recipes:

  • Timing: More than one option is available, but the choice can affect when events actually occur.
  • Ordering: Unstructured tasks mean ordering is up to the caller. Think carefully about dependencies, multiple invocations, and cancellation.
  • Lack of Caller Control: Definitions always control actor context. This is different from other threading models, and you cannot alter definitions you do not control.
  • Sendability: Types that cross isolation domains must be Sendable. This isn't always easy, and for types you do not control, it may not be possible.
  • Blocking: Swift concurrency uses a fixed-size thread pool. Tying up background threads can lead to lag and even deadlock.
  • Availability: Concurrency is evolving rapidly, and some APIs require the latest SDK.
  • Async virality: Making a function async affects all its call sites. This can result in a large number of changes, each of which could, itself, affect subsequent call sites.
  • Actor Reentrancy: More than one thread can enter an actor's async methods. An actor's state can change across awaits.

Create 64-bit ARM assembly language instructions that adhere to the application binary interface (ABI) that Apple platforms support.

The ARM architecture defines rules for how to call functions, manage the stack, and perform other operations. If part of your code includes ARM assembly instructions, you must adhere to these rules in order for your code to interoperate correctly with compiler-generated code. Similarly, if you write a compiler, the machine instructions you generate must adhere to these rules. If you don’t adhere to them, your code may behave unexpectedly or even crash.

Apple platforms diverge from the standard 64-bit ARM architecture in a few specific ways. Apart from these small differences, iOS, tvOS, and macOS adhere to the rest of the 64-bit ARM specification. For information about the ARM64 specification, including the Procedure Call Standard for the ARM 64-bit Architecture (AArch64), go to https://developer.arm.com.

Find patterns in crash reports that identify common problems, and investigate the issue based on the pattern.

You can identify the causes for many app crashes by looking for specific patterns in the crash report and taking specific diagnostic actions based on what the pattern shows. To recognize patterns, you consult two sections available in every crash report:

  • The exception code in the Exception Information section identifies the specific way the app crashed.
  • The backtraces show what code the thread was executing at the time of the crash.

Some types of common crashes have a Diagnostic Messages section or a Last Exception Backtrace in the Backtraces section, which further describe the issue. These sections aren’t present in all crash reports. Examining the fields in a crash report describes each section and field in detail.

Compare the examples provided in this article to a crash report you’re investigating. Once you find a match, proceed to the more detailed article about that type of crash.

Determining whether your crash report contains a pattern for a common issue is the first step in diagnosing a problem. In some cases, the suggested diagnostic actions won’t identify the cause of the issue, requiring a more thorough analysis of the entire crash report. Analyzing a crash report describes how to perform a detailed analysis of a crash report.

Learn what the exception type tells you about why your app crashed.

The exception type in a crash report describes how the app terminated. It’s a key piece of information that guides how to investigate the source of the problem.

The exception types are summarized here. See the sections that follow for more information.

Identify clues in a crash report that help you diagnose problems.

A crash report is a detailed log of an app’s state when it crashed, making it a crucial resource for identifying a problem before attempting to fix it. If you’re investigating a crash that isn’t resolved by the techniques discussed in Identifying the cause of common crashes, you need to do a careful analysis of the complete crash report.

When analyzing a crash report, read the information in all sections. As you formulate the hypothesis about the cause of a crash, ask questions about what the data in each section of the crash report says to refine or disprove the hypothesis. Some clues are explicitly captured by fields in the crash report, but other clues are subtle, and require you to uncover them by noticing small details. Performing a thorough analysis of a crash report and formulating a hypothesis takes time and practice to develop, but is a critical tool for making your app more robust.

Understand the structure of a crash report and the information each field contains.

Learn how Apple Vision Pro and visionOS protect your data

In this article we have explored how we can bridge from callback-based code or delegate-based code into async/await. We learned how to use checked continuations to do so, and we enforced the idea of what a continuation actually is.

With this, you should now understand all the essentials of async/await. You are now ready to tackle actual concurrency, and next week we will start talking about that, starting with structured concurrency. You will learn how to run many tasks in parallel and how to process such results.
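A minimal sketch of the bridging pattern the article covers, where `fetchData(completion:)` stands in for any legacy callback-based API:

```swift
import Foundation

// A legacy callback-based API (illustrative).
func fetchData(completion: @escaping (Result<String, Error>) -> Void) {
    DispatchQueue.global().async {
        completion(.success("hello"))
    }
}

// The async wrapper: suspend, hand the continuation to the callback,
// and resume exactly once when the callback fires.
func fetchData() async throws -> String {
    try await withCheckedThrowingContinuation { continuation in
        fetchData { result in
            continuation.resume(with: result)
        }
    }
}
```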

In SwiftUI we can create smooth transitions between views from one state to another with the Matched Geometry Effect. Using unique identifiers we can blend the geometry of two views with the same identifier creating an animated transition. Transitions like this can be useful for navigation or changing the state of UI elements.

To implement it on your user interface you must:

  1. Define the namespace that will be used to synchronize the geometry of the views;
  2. Define the initial and final states of the views that will be animated;
  3. Use the proper view modifier to identify the initial and final states for the matched geometry transition to take effect;
  4. Trigger the transition.
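The four steps above can be sketched like this (the "card" identifier and sizes are illustrative):

```swift
import SwiftUI

struct MatchedDemo: View {
    @Namespace private var namespace   // 1. the shared namespace
    @State private var expanded = false

    var body: some View {
        VStack {
            if expanded {
                // 2./3. final state, tagged with the same identifier
                RoundedRectangle(cornerRadius: 16)
                    .matchedGeometryEffect(id: "card", in: namespace)
                    .frame(width: 300, height: 200)
            } else {
                // 2./3. initial state
                RoundedRectangle(cornerRadius: 16)
                    .matchedGeometryEffect(id: "card", in: namespace)
                    .frame(width: 100, height: 100)
            }
        }
        .onTapGesture {
            withAnimation { expanded.toggle() }   // 4. trigger the transition
        }
    }
}
```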

These cards are meant to supplement your studies during your technical job search - or are great for people learning data structures. Included are 46 digital cards that cover the data structures you need to know for technical interviews.

Are there any examples in the history of mathematics of a mathematical proof that was initially reviewed and widely accepted as valid, only to be disproved a significant amount of time later, possibly even after being used in proofs of other results?

(I realise it's a bit vague, but if there is significant doubt in the mathematical community then the alleged proof probably doesn't qualify. What I'm interested in is whether the human race as a whole is known to have ever made serious mathematical blunders.)

In telecommunications, some very large and very small values are used. To make these numbers easier to write, prefixes are used: each prefix stands for a value by which the number must be multiplied.

Some prefixes are also used in digital communications and computer technology, but there they have a slightly different value because they are based on a power of 2.

| Prefix | Analog value | Digital value |
|--------|--------------|---------------|
| p (pico) | 10⁻¹² | – |
| n (nano) | 10⁻⁹ | – |
| µ (micro) | 10⁻⁶ | – |
| m (milli) | 10⁻³ | – |
| k (kilo) | 10³ (1,000) | 2¹⁰ (1,024) |
| M (mega) | 10⁶ (1,000,000) | 2²⁰ (1,048,576) |
| G (Giga) | 10⁹ (1,000,000,000) | 2³⁰ (1,073,741,824) |
| T (Tera) | 10¹² (1,000,000,000,000) | 2⁴⁰ (1,099,511,627,776) |

Typestate is a powerful design pattern that emerged in languages with advanced type systems and strict memory ownership models, notably Rust. It is now available to Swift programmers with the introduction of Noncopyable types in Swift 5.9.

Typestate brings the concept of a State Machine into the type system. In this pattern, the state of an object is encoded in its type, and transitions between states are reflected in the type system.

Crucially, Typestate helps catch serious logic mistakes at compile time rather than runtime. This makes it great for designing mission-critical systems, especially where human safety is involved (see the Tesla car example).

Like with most design patterns, the best way to understand it is by examining some examples.

Typestate is a powerful design pattern that brings great type and memory safety to your programs. It can drastically reduce the possibility of critical bugs and undefined behaviours by catching them at compile time. It can also reduce the reliance on inherently skippable quality control measures, such as tests, linters, code reviews, etc.

To decide if Typestate is a good choice for your use case, see if ANY of these apply:

  • Your program behaves like a state machine. You can identify distinct states and transitions between them.
  • Your program needs to enforce a strict order of operations, where out-of-order operations can lead to bugs or undefined behaviour.
  • Your program manages resources that have open/use/close semantics. Typical examples: files, connections, streams, audio/video sessions, etc. Resources that can't be used before they are acquired, and that must be relinquished after use.
  • Your program manages mutually exclusive systems. See the Tesla car example below, where the gas pedal either accelerates the real car or the video game car, depending on the state.

Example implementation
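A minimal sketch of the pattern with Swift 5.9 noncopyable types (the file-handle names here are illustrative, not from a real API): each state is its own type, and `consuming` methods destroy the old state's value so stale handles can't be reused.

```swift
// Closed state: the only way out is `open()`, which consumes the value.
struct ClosedFile: ~Copyable {
    let path: String
    consuming func open() -> OpenFile {
        OpenFile(path: path)
    }
}

// Open state: reading is allowed; `close()` consumes the value, so the
// compiler rejects any use of the handle after it is closed.
struct OpenFile: ~Copyable {
    let path: String
    func read() -> String { "contents of \(path)" }
    consuming func close() -> ClosedFile {
        ClosedFile(path: path)
    }
}

let file = ClosedFile(path: "/tmp/example.txt").open()
_ = file.read()
let closed = file.close()
// file.read() here would be a compile-time error: 'file' was consumed.
```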

Setting your own vertical or horizontal alignment guide isn’t something I’ve thought about much when writing my own SwiftUI code. When they were announced, and later demo’d during a dub dub session in SwiftUI’s early days, I remember thinking, “Yeah, I don’t get that. Will check out later.”

Lately, though, I’ve seen two novel use cases where using one is exactly what was required. Or, at the very least, it solved a problem in a manageable way.

Rather than write out a bunch of code myself, I’ll get straight to it and show you the examples from other talented developers.

For Unison Cloud, we have simple prices that fit on a notecard. We don't pass the bizarrely complicated pricing structure of infra providers on to our customers, since most companies don't want or need that. They want simple and predictable pricing, and good productivity. For a larger enterprise deal, we're of course happy to negotiate a more granular pricing scheme (and we can suggest some options), as long as you aren't asking us to sell you $1 at a 20% discount. If you do need something custom, please get in touch.

To keep our costs under control, we make use of rate limiting so one user can't monopolize our resources or render an entire service unprofitable. But there are a range of limits which still allow us to operate profitably, and for bigger customers looking to optimize their spending, we're again happy to work out some custom deal.

This makes a lot more sense to us than having a complicated default pricing scheme that only serves the needs of the 1%. Big accounts are likely to want a custom deal anyway for their unique needs, so why not keep the default prices simple and leave the complexity for custom deals? Everybody wins.

Besides keeping pricing simple, we are actually serious about improving developer productivity. On Unison Cloud, there's no packaging or building containers, no boilerplate talking between services, no tedious code getting data stashed in durable storage and read back later, and lots more. We think the cloud should be simple and delightful to use, and we're making it happen.

Around the nLab and elsewhere, one occasionally sees an expression “the walking _____” where the blank is some mathematical concept. This is a colloquial way of referring to an archetypal model of the concept or type, and usually refers to a free or initial form of such a kind of structure.

Pronunciation is just as in ‘John is a walking almanac’ or ‘Eugene Levy is a walking pair of eyebrows’. The term is believed to have been introduced by James Dolan.

Sometimes, “the free-living _____” or “the free-standing _____” is used instead; this terminology is probably much older.

import SwiftUI

extension View {
    func debugLog(_ name: String) -> some View {
        MyLayout(name: name) { self }
    }
}

struct MyLayout: Layout {
    var name: String

    func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) -> CGSize {
        assert(subviews.count == 1)
        let result = subviews[0].sizeThatFits(proposal)
        print(name, proposal, result)
        return result
    }

    func placeSubviews(in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) {
        subviews[0].place(at: bounds.origin, proposal: proposal)
    }
}

Learn about changes and features included in the firmware updates for your AirPods.

Firmware updates are delivered automatically while your AirPods are charging and in Bluetooth range of your iPhone, iPad, or Mac that's connected to Wi-Fi. You can also use your iPhone, iPad, or Mac to check that your AirPods have the latest version.

To use your iPhone or iPad to check that your AirPods are up to date, make sure that you have the latest version of iOS or iPadOS. Go to Settings > Bluetooth, then tap the Info button next to the name of your AirPods. Scroll down to the About section to find the firmware version.

To use your Mac to check that your AirPods are up to date, make sure that you have the latest version of macOS. Press and hold the Option key while choosing Apple menu  > System Information. Click Bluetooth, then look under your AirPods for the firmware version. With macOS Ventura or later, you can also choose Apple menu  > System Settings, click Bluetooth, then click the Info button next to the name of your AirPods.

If you don't have an Apple device nearby, you can set up an appointment at an Apple Store or with an Apple Authorized Service Provider to update your firmware.

Sound Actions allows you to make sounds to perform actions such as the following:

  • Tap
  • Recenter apps
  • Open Capture
  • Access Control Center
  • Adjust the volume
  • Take a screenshot
  • Scroll up or down
  • Activate Siri

  1. Go to Settings > Accessibility > Interaction > Sound Actions.
  2. Tap a sound, then assign an action to it. You can also tap Practice to practice sounds before assigning one to an action.

Animating the rotation, scale, or translation of an entity is quite straightforward: Entity has a move method. The code below moves an entity 0.5 m along the X axis.

let transform = Transform(scale: .one, rotation: simd_quatf(), translation: [0.5, 0, 0])
entity.move(to: transform, relativeTo: entity, duration: 1, timingFunction: .easeInOut)

To do something at the end of the animation you add a subscription to the scene's publisher:

// Retain the returned AnyCancellable; the subscription is cancelled when it is released.
let subscription = scene.publisher(for: AnimationEvents.PlaybackCompleted.self, on: entity)
    .sink { event in
        print("Animation finished")
    }

A class your Metal app uses to register for callbacks to synchronize its animations for a display.

CAMetalDisplayLink instances are a specialized way to interact with variable-rate displays when you need more control over the timing window to render your app’s frames. Controlling the timing window and rendering delay for frames can help you achieve smoother frame rates and avoid visual artifacts.

[!Tip] When working with less visually intensive apps or apps which don’t use Metal, use CADisplayLink to handle variable refresh rates.

Your app initializes a new Metal display link by providing a target CAMetalLayer. Set this instance’s delegate property to an implementation that encodes the rendering work for Metal to perform. With a set delegate, synchronize the display with a run loop to perform rendering on by calling the add(to:forMode:) method. Once you associate the display link with a run loop, the system calls the delegate’s metalDisplayLink(_:needsUpdate:) method to request new frames. This method receives update requests based on the preferredFrameRateRange and preferredFrameLatency of the display link. The system makes a best effort to make callbacks at appropriate times. Your app should complete any commits to the Metal device’s MTLCommandQueue for rendering the display layer before calling present() on a drawable element. Your app can disable notifications by setting isPaused to true. When your app finishes with a display link, call invalidate() to remove it from all run loops and the target.
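A sketch of that flow, assuming the iOS 17 / macOS 14 API shapes (error handling and the actual encoding are elided; treat the delegate-method and `Update` member names as best-effort, not verified against every SDK):

```swift
import Metal
import QuartzCore

final class Renderer: NSObject, CAMetalDisplayLinkDelegate {
    let commandQueue: MTLCommandQueue

    init(layer: CAMetalLayer, device: MTLDevice) {
        commandQueue = device.makeCommandQueue()!
        super.init()
        let link = CAMetalDisplayLink(metalLayer: layer)
        link.delegate = self
        link.add(to: .main, forMode: .default)   // run loop retains the link
    }

    // Called by the system when it wants a new frame.
    func metalDisplayLink(_ link: CAMetalDisplayLink,
                          needsUpdate update: CAMetalDisplayLink.Update) {
        let drawable = update.drawable
        let commandBuffer = commandQueue.makeCommandBuffer()!
        // …encode rendering into commandBuffer here…
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
```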

A timer object that allows your app to synchronize its drawing to the refresh rate of the display.

Your app initializes a new display link by providing a target object and a selector to call when the system updates the screen. To synchronize your display loop with the display, your application adds it to a run loop using the add(to:forMode:) method.

Once you associate the display link with a run loop, the system calls the selector on the target when the screen’s contents need to update. The target can read the display link’s timestamp property to retrieve the time the system displayed the previous frame. For example, an app that displays movies might use timestamp to calculate which video frame to display next. An app that performs its own animations might use timestamp to determine where and how visible objects appear in the upcoming frame.

The duration property provides the amount of time between frames at the maximumFramesPerSecond. To calculate the actual frame duration, use targetTimestamp - timestamp. You can use this value in your app to calculate the frame rate of the display, the approximate time the system displays the next frame, and to adjust the drawing behavior so that the next frame is ready in time to display.

Your app can disable notifications by setting isPaused to true. Also, if your app can’t provide frames in the time the system provides, you may want to choose a slower frame rate. An app with a slower but consistent frame rate appears smoother to the user than an app that skips frames. You can define the number of frames per second by setting preferredFramesPerSecond.

When your app finishes with a display link, call invalidate() to remove it from all run loops and to disassociate it from the target.

The code listing below shows how to create a display link and add it to the current run loop. The display link invokes the step function, which prints the target timestamp with each screen update.

```swift
func createDisplayLink() {
    let displaylink = CADisplayLink(target: self, selector: #selector(step))
    displaylink.add(to: .current, forMode: .default)
}

@objc func step(displaylink: CADisplayLink) {
    print(displaylink.targetTimestamp)
}
```

You shouldn’t subclass CADisplayLink.

In the early-to-mid 2010s, there was a renaissance in languages exploring new ways of doing concurrency. In the midst of this renaissance, one abstraction for achieving concurrent operations that was developed was the “future” or “promise” abstraction, which represented a unit of work that will maybe eventually complete, allowing the programmer to use this to manipulate control flow in their program. Building on this, syntactic sugar called “async/await” was introduced to take futures and shape them into the ordinary, linear control flow that is most common. This approach has been adopted in many mainstream languages, a series of developments that has been controversial among practitioners.

The story of our site-wide redesign and web tech and accessibility wins.

We are delighted to announce the open source first release of Pkl (pronounced Pickle), a programming language for producing configuration.

When thinking about configuration, it is common to think of static languages like JSON, YAML, or Property Lists. While these languages have their own merits, they tend to fall short when configuration grows in complexity. For example, their lack of expressivity means that code often gets repeated. Additionally, it can be easy to make configuration errors, because these formats do not provide any validation of their own.

To address these shortcomings, sometimes formats get enhanced by ancillary tools that add special logic. For example, perhaps there’s a need to make code more DRY, so a special property is introduced that understands how to resolve references, and merge objects together. Alternatively, there’s a need to guard against validation errors, so some new way is created to validate a configuration value against an expected type. Before long, these formats almost become programming languages, but ones that are hard to understand and hard to write.

On the other end of the spectrum, a general-purpose language might be used instead. Languages like Kotlin, Ruby, or JavaScript become the basis for DSLs that generate configuration data. While these languages are tremendously powerful, they can be awkward to use for describing configuration, because they are not oriented around defining and validating data. Additionally, these DSLs tend to be tied to their own ecosystems. It is a hard sell to use a Kotlin DSL as the configuration layer for an application written in Go.

We created Pkl because we think that configuration is best expressed as a blend between a static language and a general-purpose programming language. We want to take the best of both worlds; to provide a language that is declarative and simple to read and write, but enhanced with capabilities borrowed from general-purpose languages. When writing Pkl, you are able to use the language features you’d expect, like classes, functions, conditionals, and loops. You can build abstraction layers, and share code by creating packages and publishing them. Most importantly, you can use Pkl to meet many different types of configuration needs. It can be used to produce static configuration files in any format, or be embedded as a library into another application runtime.

We designed Pkl with three overarching goals:

  • To provide safety by catching validation errors before deployment.
  • To scale from simple to complex use-cases.
  • To be a joy to write, with our best-in-class IDE integrations.
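As a rough illustration of those goals, a hypothetical Pkl module might declare a typed, constrained schema alongside the data (names and details here are a sketch based on Pkl's documented style, not taken from the announcement):

```pkl
// The class declares the shape and constraints, so an invalid value
// (e.g. a privileged port) fails at evaluation time, before deployment.
class Server {
  host: String
  port: UInt16(this >= 1024)
}

servers: Listing<Server> = new {
  new {
    host = "web1.example.com"
    port = 8080
  }
  new {
    host = "web2.example.com"
    port = 8081
  }
}
```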

CMTime is a struct representing a time value such as a timestamp or duration. CMTime is defined by CoreMedia and it is often used by AVFoundation API interfaces.

Because the interface of CMTime is horrible, and its documentation is even worse, here are a few use cases to make it easier to work with CMTime on a daily basis.
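For instance, a few common CMTime idioms look like this (a minimal sketch with arbitrary values, not taken from the article itself):

```swift
import CoreMedia

// A CMTime is a rational number: value / timescale seconds.
let frame = CMTime(value: 1, timescale: 30)            // 1/30 s, one frame at 30 fps
let start = CMTime(seconds: 2.5, preferredTimescale: 600)

// Arithmetic reconciles differing timescales for you.
let end = CMTimeAdd(start, frame)
let range = CMTimeRange(start: start, end: end)

print(frame.seconds)               // ≈ 0.0333…
print(CMTimeCompare(start, end))   // negative: start precedes end
```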

Apple Vision Pro – available today in the US – is a wearable spatial computer that blends the digital with the physical, heralding a whole new platform for experiencing technology

Explore the visionOS simulator's debug modes in Xcode for spatial computing apps.

Let's explore the debugging modes within the visionOS simulator in Xcode, tailored for developers working on spatial computing applications. Understanding these modes is crucial for effectively visualizing and troubleshooting applications in the unique environment that Vision Pro offers.

Developers have been working hard to create or update their apps for Apple Vision Pro. Here's a list of selected apps you might want to try out.

Strap in, cancel your Netflix, and load up some amazing apps!

Prefer to see ALL the apps? There's a great list of supported visionOS apps in this Google doc. Worth a bookmark when you're looking for new ideas.

It’s not often that we see a new platform get introduced to the world. Over the last two decades, there have really been only two platforms that focus on general-purpose computing. We might be witnessing the beginning of the third today.

When there is a new platform, it’s always cool to see what are all the new possibilities it enables. Going through the App Store for visionOS, I’m already surprised by the creativity of some developers, and also amazed by the new experiences that take advantage of the platform. I’m really looking forward to seeing what other cool things people create on visionOS.

Welcome to the era of spatial computing.

Here are all the native third-party apps available on day one for visionOS that I was able to find through the Apple Media Service API.

It’s really impressive to see the developers working through all the challenges and complications to bring something new to the world, and congratulations on launching these apps! It wouldn’t be a general-purpose computer if there were no third-party apps 😛

The engineer side of me really hopes this platform can succeed, as the technology packed into Apple Vision Pro is truly impressive. However, like many other developers, I think Apple’s behavior around the App Store and app review is really alienating developers and pushing them further away. Apple’s view that the iPhone could succeed without any third-party apps is just so out of touch to me (RIP Windows Phone 🥲). Maybe I'm naive, but I hope that with this new platform we can meet somewhere in the middle and have both parties appreciate each other’s role in making the platform successful. (The current relationship is definitely not healthy, as I have to worry about the possibility of retaliation from Apple just for writing this 🙃)

React Native is not a single company initiative, and its modularity allows many to step up and provide a solution for each aspect. Some libraries are gaining popularity, some solutions are fading from the scene, and some limitations are becoming more apparent.

All of this can make it difficult for developers to choose the right tools and libraries for their projects and be confident in their decisions.

The second edition of the survey presents the trends and outlines the new initiatives happening in the React Native ecosystem. Starting with this edition, we can examine the popularity and usability of specific solutions year over year. Some of the trends were expected, while others are complete surprises. We find that some aspects are getting more attention from contributors than ever before - take a look at the styling or debugging sections (and others!).

The first edition of the survey was very successful. Major players in the ecosystem are reading and responding to the data. We've also grown by more than 500 new respondents year over year, reaching nearly 2400 unique respondents. By reaching more and more developers each year, we become a torch that guides people into the depths of the React Native ecosystem.

Enter the second edition of the State of React Native survey. Designed to consolidate opinions and provide meaningful insight into a variety of aspects that React Native developers deal with on a daily basis. I'm confident that the data you'll find here will serve you well the next time you need to choose the right state management solution for your project, or make any other React Native-related decision.

January

Discover spatial computing apps. Enjoy groundbreaking immersive experiences, explore new universes and get to know visionOS apps that are available on Apple Vision Pro.

Learn the fundamental concepts of SwiftNIO, such as EventLoops and nonblocking I/O

The Swift language constructs self, Self, and Self.self can sometimes be a source of confusion, even for experienced developers. It's not uncommon to pause and recall what each is referring to in different contexts. In this post I aim to provide some straightforward examples that clarify the distinct roles and uses of these three constructs. Whether it's managing instance references, adhering to protocol conformance, or accessing metatype information, understanding these concepts is key to harnessing the full potential of Swift in our projects.
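A minimal sketch of the three constructs side by side (the Counter type is hypothetical):

```swift
struct Counter {
    var value = 0

    mutating func increment() {
        self.value += 1            // self: the current instance
    }

    static func make() -> Self {   // Self: the concrete receiver type
        Self(value: 0)
    }
}

// Type.self produces the metatype value; its type is written Counter.Type.
let metatype: Counter.Type = Counter.self

var counter = Counter.make()
counter.increment()
print(counter.value)              // 1
print(metatype == Counter.self)   // true
```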

Create an alternative app marketplace or distribute your app on one.

An alternative app marketplace is an iOS app from which someone can install apps from other developers, as an alternative to the App Store. MarketplaceKit enables alternative app marketplaces to install the apps they host on peoples’ devices. The framework also supports features that compose quality browsing and installation experience, such as Spotlight Search and App Thinning. With the framework, you can manage existing app installations, convey download progress, update app licensing, and customize app search behavior.

In addition to alternative app marketplaces, this framework also serves:

  • Web browsers, specifically by letting them request the installation of an alternative app marketplace when a person initiates it from that marketplace’s webpage.
  • Apps that distribute from an alternative app marketplace, by determining the installation source at runtime. This allows a marketplace-hosted app to branch its functionality depending on the marketplace from which it was installed on a particular device, to accommodate differences between marketplaces.

To learn about the criteria and request the marketplace entitlement, see Getting started as an alternative app marketplace in the European Union.

Record an AR session in Reality Composer and replay it in your ARKit app.

ARKit apps use video feeds and sensor data from an iOS device to understand the world around the device. This reliance on real-world input makes the testing of an AR experience challenging because real-world input is never the same across two AR sessions. Differences in lighting conditions, device motion, and the location of nearby objects all change how RealityKit understands and renders the scene each time.

To provide consistent data to your AR app, you can record a session using Reality Composer, then use the recorded camera and sensor data to drive your app when running from Xcode.

Communication between views in SwiftUI can be tricky. As explained in a previous story about SwiftUI State monitoring, SwiftUI PropertyWrappers offer us a lot by hiding some complexity of managing the source of truth for our views. However, they can also bring confusion regarding state management and how to communicate between views.

Here’s a quick recap of the most common options.

Apple is sharing new business terms available for developers’ apps in the European Union. Developers can choose to adopt these new business terms, or stay on Apple’s existing terms. For existing developers who want nothing to change for them — from how the App Store works currently and in the rest of the world — no action is needed, and they can continue to distribute their apps only on the App Store and use its private and secure In-App Purchase system. Developers must adopt the new business terms for EU apps to use the new capabilities for alternative distribution or alternative payment processing.

Swift 5.2 brought some awesome changes to the package manager thanks to SE-0226 that massively improved the handling of dependencies. Going forward no longer would you face the spinning resolution of doom if you had dependency conflicts. And no longer would you have to download all transitive dependencies if some were only used in testing of your dependencies.

The Apple TV app lets you browse content from a variety of video services without switching from one app to the next. It provides movies, shows, and handpicked recommendations. The app is on iOS and tvOS devices— so you can watch wherever you go.

Changes to iOS

In the EU, Apple is making a number of changes to iOS to comply with the DMA. For developers, those changes include new options for distributing apps. The coming changes to iOS in the EU include:

  • New options for distributing iOS apps from alternative app marketplaces — including new APIs and tools that enable developers to offer their iOS apps for download from alternative app marketplaces.
  • New framework and APIs for creating alternative app marketplaces — enabling marketplace developers to install apps and manage updates on behalf of other developers from their dedicated marketplace app.
  • New frameworks and APIs for alternative browser engines — enabling developers to use browser engines, other than WebKit, for browser apps and apps with in-app browsing experiences.
  • Interoperability request form — where developers can submit additional requests for interoperability with iPhone and iOS hardware and software features.

As announced by the European Commission, Apple is also sharing DMA-compliant changes impacting contactless payments. That includes new APIs enabling developers to use NFC technology in their banking and wallet apps throughout the European Economic Area. And in the EU, Apple is introducing new controls that allow users to select a third-party contactless payment app — or an alternative app marketplace — as their default.

Inevitably, the new options for developers’ EU apps create new risks to Apple users and their devices. Apple can’t eliminate those risks, but within the DMA’s constraints, the company will take steps to reduce them. These safeguards will be in place when users download iOS 17.4 or later, beginning in March, and include:

  • Notarization for iOS apps — a baseline review that applies to all apps, regardless of their distribution channel, focused on platform integrity and protecting users. Notarization involves a combination of automated checks and human review.
  • App installation sheets — that use information from the Notarization process to provide at-a-glance descriptions of apps and their functionality before download, including the developer, screenshots, and other essential information.
  • Authorization for marketplace developers — to ensure marketplace developers commit to ongoing requirements that help protect users and developers.
  • Additional malware protections — that prevent iOS apps from launching if they’re found to contain malware after being installed to a user’s device.

These protections — including Notarization for iOS apps, and authorization for marketplace developers — help reduce some of the privacy and security risks to iOS users in the EU. That includes threats like malware or malicious code, and risks of installing apps that misrepresent their functionality or the responsible developer.

Changes to Safari

Today, iOS users already have the ability to set a third-party web browser — other than Safari — as their default. Reflecting the DMA’s requirements, Apple is also introducing a new choice screen that will surface when users first open Safari in iOS 17.4 or later. That screen will prompt EU users to choose a default browser from a list of options.

This change is a result of the DMA’s requirements, and means that EU users will be confronted with a list of default browsers before they have the opportunity to understand the options available to them. The screen also interrupts EU users’ experience the first time they open Safari intending to navigate to a webpage.

Changes to the App Store

On the App Store, Apple is sharing a number of changes for developers with apps in the EU, affecting apps across Apple’s operating systems — including iOS, iPadOS, macOS, watchOS, and tvOS. The changes also include new disclosures informing EU users of the risks associated with using alternatives to the App Store’s secure payment processing.

For developers, those changes include:

  • New options for using payment service providers (PSPs) — within a developer’s app to process payments for digital goods and services.
  • New options for processing payments via link-out — where users can complete a transaction for digital goods and services on the developer’s external website. Developers can also inform EU users of promotions, discounts, and other deals available outside of their apps.
  • Business planning tools — for developers to estimate fees and understand metrics associated with Apple’s new business terms for apps in the EU.

The changes also include new steps to protect and inform EU users, including:

  • App Store product page labels — that inform users when an app they’re downloading uses alternative payment processing.
  • In-app disclosure sheets — that let users know when they are no longer transacting with Apple, and when a developer is directing them to transact using an alternative payment processor.
  • New App Review processes — to verify that developers accurately communicate information about transactions that use alternative payment processors.
  • Expanded data portability on Apple’s Data & Privacy site — where EU users can retrieve new data about their usage of the App Store and export it to an authorized third party.

For apps that use alternative payment processing, Apple will not be able to issue refunds, and will have less ability to support customers encountering issues, scams, or fraud. Helpful App Store features — like Report a Problem, Family Sharing, and Ask to Buy — will also not reflect these transactions. Users may have to share their payment information with additional parties, creating more opportunities for bad actors to steal sensitive financial information. And on the App Store, users’ purchase history and subscription management will only reflect transactions made using the App Store’s In-App Purchase system.

  • vmmap --summary X.memgraph
  • vmmap X.memgraph | rg "MEMORY REGION NAME"
  • vmmap --verbose X.memgraph | rg "MEMORYREGION"
  • leaks --traceTree 0xSTARTINGMEMORYADDRESS
  • malloc_history X.memgraph --fullStacks 0xSTARTINGMEMORYADDRESS
  • Other helpful commands
    • vmmap --pages X.memgraph
    • leaks X.memgraph
    • heap X.memgraph
    • heap X.memgraph -sortBySize
    • heap X.memgraph -addresses all | <classes-pattern>
> ```swift
> .safeAreaInset(edge: .top, spacing: 0) {
>     if canFilterTimeline, !pinnedFilters.isEmpty {
>         TimelineQuickAccessPills(pinnedFilters: $pinnedFilters, timeline: $timeline)
>             .padding(.vertical, 8)
>             .padding(.horizontal, .layoutPadding)
>             .background(theme.primaryBackgroundColor.opacity(0.50))
>             .background(Material.regular)
>     }
> }
> .if(canFilterTimeline && !pinnedFilters.isEmpty) { view in
>     view.toolbarBackground(.hidden, for: .navigationBar)
> }
> ```

This dashboard tracks technical issues in major software platforms which disadvantage Firefox relative to the first-party browser. We consider aspects like security, stability, performance, and functionality, and propose changes to create a more level playing field.

Further discussion on the live issues can be found in our platform-tilt issue tracker.

In this blog post, I'll explain when and where you can use Swift's new package access modifier. I'll also give an outlook on plans from Apple to extend its usefulness for closed-code enterprise SDKs.

This should be everything you need to decode simple QR codes by hand. You can now either press the "Random code" button at the top to practice on short English words, or go find a QR code in the wild, and scan it using the "Scan code" button!

SwiftUI’s LazyVGrid and LazyHGrid offer powerful tools for creating dynamic and responsive grid layouts in iOS apps. Starting with the basics of LazyVGrid, we explored how different GridItem types like Adaptive, Fixed, and Flexible can shape your grid’s behavior and appearance. We then delved into LazyHGrid, highlighting its horizontal layout capabilities, which complement the vertical nature of LazyVGrid. The section on customizing grid spacing and alignment emphasized the importance of these elements in enhancing the visual appeal and functionality of your grids. By mastering these grid layouts, you can create diverse and engaging interfaces that are both visually appealing and user-friendly, significantly elevating the user experience in your SwiftUI applications.
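A small sketch of the pieces mentioned above: an adaptive-column LazyVGrid with custom spacing and alignment (the PhotoGrid view and its contents are hypothetical):

```swift
import SwiftUI

struct PhotoGrid: View {
    // As many 80pt-minimum columns as fit, sharing leftover width.
    private let columns = [GridItem(.adaptive(minimum: 80), spacing: 8)]

    var body: some View {
        ScrollView {
            LazyVGrid(columns: columns, alignment: .center, spacing: 12) {
                ForEach(0..<50, id: \.self) { index in
                    RoundedRectangle(cornerRadius: 8)
                        .fill(.blue.opacity(0.3))
                        .frame(height: 80)
                        .overlay(Text("\(index)"))
                }
            }
            .padding(.horizontal)
        }
    }
}
```

Swapping `.adaptive` for `.fixed(100)` or `.flexible(minimum: 60)` changes how the columns claim space, which is the distinction the article walks through.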

There are two key insights here.

  1. the alignment guide passed to the .alignmentGuide(…) method refers to the container, not the view we’re modifying.
  2. the alignment guides influence the layout of the cross dimension of the stack. So for a VStack you can control the horizontal alignment (but clearly the bars here are still stacked vertically). For an HStack you can control the vertical alignment.

So in my case I need a ZStack so I can have them vertically aligned and horizontally offset from each other in a way I can modify with the alignment guide.
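That idea might be sketched like this (the SteppedBars view, sizes, and offsets are made up for illustration):

```swift
import SwiftUI

struct SteppedBars: View {
    var body: some View {
        // ZStack lets us control both axes with alignment guides.
        ZStack(alignment: .topLeading) {
            ForEach(0..<4) { index in
                Rectangle()
                    .fill(.teal)
                    .frame(width: 120, height: 20)
                    // Returning a smaller guide value shifts the view in the
                    // positive direction relative to the container's alignment.
                    .alignmentGuide(.leading) { d in
                        d[.leading] - CGFloat(index) * 24   // horizontal offset
                    }
                    .alignmentGuide(.top) { d in
                        d[.top] - CGFloat(index) * 28       // vertical offset
                    }
            }
        }
    }
}
```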

ML models are probabilistic. Imagine that you want to know what’s the best cuisine in the world. If you ask someone this question twice, a minute apart, their answers both times should be the same. If you ask a model the same question twice, its answer can change. If the model thinks that Vietnamese cuisine has a 70% chance of being the best cuisine and Italian cuisine has a 30% chance, it’ll answer “Vietnamese” 70% of the time, and “Italian” 30%.

This probabilistic nature makes AI great for creative tasks. What is creativity but the ability to explore beyond the common possibilities, to think outside the box?

However, this probabilistic nature also causes inconsistency and hallucinations. It’s fatal for tasks that depend on factuality. Recently, I went over 3 months’ worth of customer support requests of an AI startup I advise and found that ⅕ of the questions are because users don’t understand or don’t know how to work with this probabilistic nature.

To understand why AI’s responses are probabilistic, we need to understand how models generate responses, a process known as sampling (or decoding). This post consists of 3 parts.

  • Sampling: sampling strategies and sampling variables including temperature, top-k, and top-p.
  • Test time sampling: sampling multiple outputs to help improve a model’s performance.
  • Structured outputs: how to get models to generate outputs in a certain format.
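The temperature and top-k ideas can be sketched in a few lines of plain Swift over a toy distribution (the logits are made up; real decoders do this over the full vocabulary):

```swift
import Foundation

// Softmax with temperature: lower T sharpens the distribution toward
// the top token, higher T flattens it toward uniform.
func softmax(_ logits: [Double], temperature: Double = 1.0) -> [Double] {
    let scaled = logits.map { $0 / temperature }
    let maxVal = scaled.max() ?? 0
    let exps = scaled.map { exp($0 - maxVal) }  // subtract max for numerical stability
    let total = exps.reduce(0, +)
    return exps.map { $0 / total }
}

// Top-k: zero out everything but the k most likely tokens, then renormalize.
func topK(_ probs: [Double], k: Int) -> [Double] {
    let threshold = probs.sorted(by: >)[k - 1]
    let kept = probs.map { $0 >= threshold ? $0 : 0 }
    let total = kept.reduce(0, +)
    return kept.map { $0 / total }
}

let logits = [2.0, 1.0, 0.1, -1.0]
let cold = softmax(logits, temperature: 0.5)  // sharper: top token dominates
let hot = softmax(logits, temperature: 2.0)   // flatter: closer to uniform
print(cold[0] > hot[0])                       // true
print(topK(softmax(logits), k: 2))            // mass only on the two top tokens
```

Top-p works the same way, except the cutoff is the smallest set of tokens whose cumulative probability reaches p rather than a fixed count.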

A service that provides a custom communication channel between your app and a File Provider extension.

Defining the Service’s Protocol

Services let you define custom actions that are not provided by Apple’s APIs. Both the app and the File Provider extension must agree upon the service’s name and protocol. Communicate the name and protocol through an outside source (for example, posting a header file that defines both the name and protocol, or publishing a library that includes them both).

The service can be defined by either the app or the File Provider extension:

  • Apps can define a service for features they would like to use. File providers can then choose to support those features by implementing the service.
  • File Provider extensions can provide a service for the features they support. Apps can then choose to use the specified service.
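Either way, the shared declarations both sides agree on might look something like this (the service name and protocol below are hypothetical, for illustration only):

```swift
import FileProvider

// Published in a shared header or library so the app and the File
// Provider extension agree on the same name and protocol.
let thumbnailServiceName = NSFileProviderServiceName("com.example.thumbnail-service")

@objc protocol ThumbnailFetching {
    // Keep parameters XPC-friendly: value types, reference types,
    // and completion handlers.
    func fetchThumbnail(forItemIdentifier identifier: String,
                        completionHandler: @escaping (Data?, Error?) -> Void)
}
```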

When defining a service’s protocol, the parameters for each method must adhere to the following rules:

Tells the delegate that the user closed one or more of the app’s scenes from the app switcher.

When the user removes a scene from the app switcher, UIKit calls this method before discarding the scene’s associated session object altogether. (UIKit also calls this method to discard scenes that it can no longer display.) If your app isn’t running, UIKit calls this method the next time your app launches.

Use this method to update your app’s data structures and to release any resources associated with the scene. For example, you might use this method to update your app’s interface to remove the content associated with the discarded scenes.

UIKit calls this method only when dismissing scenes permanently. It doesn’t call it when the system disconnects a scene to free up memory. Memory reclamation deletes the scene objects, but preserves the sessions associated with those scenes.

The @Observable Macro simplifies code at the implementation level and increases the performance of SwiftUI views by preventing unnecessary redraws. You’re no longer required to use @ObservedObject, ObservableObject, and @Published. However, you still need to use @State to create a single source of truth for model data.
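A minimal sketch (the CartModel type is hypothetical; the SwiftUI side is shown in comments, since the macro itself lives in the cross-platform Observation module):

```swift
import Observation

// No @Published and no ObservableObject conformance needed: the macro
// tracks which properties are read and invalidates views only for those.
@Observable
final class CartModel {
    var items: [String] = []
    var total: Double = 0
}

// In SwiftUI you still own the single source of truth with @State:
//
//   struct CartView: View {
//       @State private var model = CartModel()
//       var body: some View { Text("\(model.items.count) items") }
//   }

let model = CartModel()
model.items.append("Coffee")
model.total = 4.50
print(model.items.count)  // 1
```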

This open access book provides plenty of pleasant mathematical surprises. There are many fascinating results that do not appear in textbooks although they are accessible with a good knowledge of secondary-school mathematics. This book presents a selection of these topics, including the mathematical formalization of origami, construction with straightedge and compass (and other instruments), the five- and six-color theorems, a taste of Ramsey theory, and little-known theorems proved by induction.

Among the most surprising theorems are the Mohr-Mascheroni theorem that a compass alone can perform all the classical constructions with straightedge and compass, and Steiner's theorem that a straightedge alone is sufficient provided that a single circle is given. The highlight of the book is a detailed presentation of Gauss's purely algebraic proof that a regular heptadecagon (a regular polygon with seventeen sides) can be constructed with straightedge and compass.

Although the mathematics used in the book is elementary (Euclidean and analytic geometry, algebra, trigonometry), students in secondary schools and colleges, teachers, and other interested readers will relish the opportunity to confront the challenge of understanding these surprising theorems.

Supplementary material to the book can be found at motib/suprises.

C0deine is a compiler for C0. It is written in Lean 4, which allows us to express the formal semantics in the same language as the compiler itself. Hopefully, the whole compiler will be verified at some point soon.

C0deine implements a number of sub-languages of C0 as well as fixing some bugs in the existing compiler. See this document for information about the languages themselves, as well as a list of changes/corrections. Also, here is a work-in-progress document detailing the static semantics of C0.

If you find any issues, please report them here.

Passkeys.directory is a community-driven index of websites, apps, and services that offer signing in with passkeys.

Investigate why your universal links are opening in Safari instead of your app.

Universal links use applinks, an associated domains service, to link directly to content within your app without routing through Safari or your website. If your app is installed, a universal link will open in your app. If it is not installed, the link will open in your default web browser, where your site handles the rest. If you are unfamiliar with universal links and how to support them in your code, see Supporting Associated Domains and Allowing apps and websites to link to your content.

This document outlines how to:

A "double category of relations" is defined in this paper as a cartesian equipment in which every object is suitably discrete. The main result is a characterization theorem that a "double category of relations" is equivalent to a double category of relations on a regular category when it has strong and monic tabulators and a double-categorical subobject comprehension scheme. This result is based in part on the recent characterization of double categories of spans due to Aleiferi. The overall development can be viewed as a double-categorical version of the notion of a "functionally complete bicategory of relations" or a "tabular allegory".

Users can turn any space into a personal theater, enjoy more than 150 3D movies, and experience the future of entertainment with Apple Immersive Video

This is a book about building applications using hypermedia systems. Hypermedia systems might seem like a strange phrase: how is hypermedia a system? Isn’t hypermedia just a way to link documents together?

In 2019, I built a work-for-hobby iOS simulator on a strict regimen of weekends and coffee. While the full details of this project will stay in-house, there’s enough I can share to hopefully be interesting!

A quick intro to the steps. There are essentially two steps in enabling universal links:

  1. Enable the entitlement. This essentially tells the app "you can open links from this specific domain". It's done in Xcode once, and forms part of your binary's metadata.
  2. Provide a data file on the domain you want to enable links from. This is deployed to prove to the world that you, the domain owner, approve of this app accepting your links.

This data file is called the Apple App Site Association file (often referred to as the AASA)

We won't go into detail of the contents here; Apple's documentation covers it well. You can find that here.
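That said, a minimal sketch of an AASA file, served from /.well-known/apple-app-site-association, might look like this (the team ID, bundle ID, and paths below are placeholders):

```json
{
  "applinks": {
    "details": [
      {
        "appIDs": ["ABCDE12345.com.example.myapp"],
        "components": [
          { "/": "/articles/*", "comment": "any URL whose path starts with /articles/" }
        ]
      }
    ]
  }
}
```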

The skills you need are your intelligence, cunning, perseverance and the will to test yourself against the intricacies of multi-threaded programming in the divine language of C#. Each challenge below is a computer program of two or more threads. You take the role of the Scheduler — and a cunning one! Your objective is to exploit flaws in the programs to make them crash or otherwise malfunction.

For example, you might cause a deadlock to occur or you might schedule context switches in such a way that two threads enter the same critical section at the same time. Any action that disrupts the program this way counts as a victory for you.

You are the Scheduler — you only have one tool at your disposal: the ability to switch contexts at any time, as the total master of time and interruptions. Let's hope it is enough... it has to be, because the Parallel Wizard's armies are upon us and only you can lead the Sequentialist armies into victory!
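The Scheduler's power can be sketched deterministically: model each thread as a generator whose `yield` points are the places a context switch may happen, and pick the interleaving yourself. This is an illustrative Python sketch, not the game's C# code; it shows the classic lost-update race on an unsynchronized counter.

```python
# Two "threads" as generators; each yield is a point where the Scheduler
# may switch contexts. The increment is split into read and write steps.
def worker(state):
    value = state["counter"]        # read
    yield                           # context-switch opportunity
    state["counter"] = value + 1    # write

def run(schedule, state):
    threads = [worker(state), worker(state)]
    for tid in schedule:
        try:
            next(threads[tid])
        except StopIteration:
            pass  # that thread has finished

safe, racy = {"counter": 0}, {"counter": 0}
run([0, 0, 1, 1], safe)  # thread 0 finishes before thread 1 starts
run([0, 1, 0, 1], racy)  # both threads read 0 before either writes
print(safe["counter"], racy["counter"])  # -> 2 1
```

The `[0, 1, 0, 1]` schedule is exactly the kind of victory the game asks you to find: one increment is lost.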

Building Blocks of “LLM Programming”

Prompts are how one channels an LLM to do something. LLMs in a sense always have lots of “latent capability” (e.g. from their training on billions of webpages). But prompts—in a way that’s still scientifically mysterious—are what let one “engineer” what part of that capability to bring out.

There are many different ways to use prompts. One can use them, for example, to tell an LLM to “adopt a particular persona”. One can use them to effectively get the LLM to “apply a certain function” to its input. And one can use them to get the LLM to frame its output in a particular way, or to call out to tools in a certain way.

And much as functions are the building blocks for computational programming—say in the Wolfram Language—so prompts are the building blocks for “LLM programming”. And—much like functions—there are prompts that correspond to “lumps of functionality” that one can expect will be repeatedly used.

Today we’re launching the Wolfram Prompt Repository to provide a curated collection of useful community-contributed prompts—set up to be seamlessly accessible both interactively in Chat Notebooks and programmatically in things like LLMFunction:
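LLMFunction itself is Wolfram Language, but the "prompt as function" idea is language-agnostic. As a loose Python illustration (the `fake_llm` stand-in is hypothetical, not a real API), a prompt template closed over a model call behaves like an ordinary function:

```python
def llm_function(template, llm):
    # Turn a prompt template into a callable, LLMFunction-style:
    # filling the template and calling the model is hidden behind **kwargs.
    def apply(**kwargs):
        return llm(template.format(**kwargs))
    return apply

# Stub "model" so the example is self-contained and deterministic.
def fake_llm(prompt):
    return f"[model response to: {prompt}]"

summarize = llm_function("Summarize in one sentence: {text}", fake_llm)
result = summarize(text="Prompts are the building blocks of LLM programming.")
print(result)
```

Swapping `fake_llm` for a real model client turns each curated prompt into a reusable "lump of functionality".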

An array of additional metadata for the player item to supplement or replace an asset’s embedded metadata.

AVPlayerViewController supports displaying the following metadata identifiers:

If you’ve worked with AVFoundation’s APIs, you’ll be familiar with CVPixelBuffer, an object which represents a single video frame. AVFoundation manages the tasks of reading, writing, and playing video frames, but the process changes when dealing with spatial video (aka MV-HEVC), which features video from two separate angles.

Loading a spatial video into an AVPlayer or AVAssetReader on iOS appears similar to loading a standard video. By default, however, the frames you receive only show one perspective (the “hero” eye view), while the alternate angle, stored in the MV-HEVC file, remains undecoded.

With iOS 17.2 and macOS 14.2, new AVFoundation APIs were introduced for handling MV-HEVC files. They make it easy to get both angles of a spatial video, but are lacking in documentation. Here’s a few tips for working with them:

An object containing information broadcast to registered observers that bridges to Notification; use NSNotification when you need reference semantics or other Foundation-specific behavior.

A few days ago, my former coworker Evan Hahn posted “The world’s smallest PNG”, an article walking through the minimum required elements of the PNG image format. He gave away the answer in the very first line:

The smallest PNG file is 67 bytes. It’s a single black pixel. However (spoilers!) he later points out that there are several valid 67-byte PNGs, such as a 1x1 all-white image, or an 8x1 all-black image, or a 1x1 gray image. All of these exploit the fact that you can’t have less than one byte of pixel data, so you might as well use all eight bits of it. Clever!

However again…are we really limited to one byte of pixel data?

(At this point you should go read Evan’s article before continuing with mine.)
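To make the chunk anatomy concrete, here is a sketch that assembles a minimal 1×1 black PNG from its required parts (signature, IHDR, IDAT, IEND) using only the standard library. The exact byte count depends on zlib's deflate output, so it isn't asserted here.

```python
import struct
import zlib

def chunk(tag, data):
    # A PNG chunk is: 4-byte big-endian length, 4-byte type,
    # data, then a CRC-32 over type || data.
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data)))

def minimal_png():
    sig = b"\x89PNG\r\n\x1a\n"
    # IHDR: width=1, height=1, bit depth=1, color type=0 (grayscale),
    # compression=0, filter=0, interlace=0
    ihdr = struct.pack(">IIBBBBB", 1, 1, 1, 0, 0, 0, 0)
    # One scanline: filter byte 0x00 + one byte holding the single pixel bit
    # (0 = black). The scanline is zlib-compressed into the IDAT chunk.
    idat = zlib.compress(b"\x00\x00", 9)
    return sig + chunk(b"IHDR", ihdr) + chunk(b"IDAT", idat) + chunk(b"IEND", b"")

png = minimal_png()
print(len(png))  # size varies slightly with zlib's output
```

The "can't have less than one byte of pixel data" observation shows up here as the second byte of the raw scanline: eight whole bits to describe one 1-bit pixel.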

Ever wanted to know how to find and fix performance issues in your app, or just how to make your app faster? In this article, we go over how I made an app 19 times faster by replacing a single component, along with how to find and fix other performance-related issues.

Most unidirectional architecture frameworks have a similar base class, so we model all our feature state inside that State type, whether that’s data that should trigger view renders or not.

And that’s one of the main bottlenecks of some Redux-ish architectures that tend to model all the app state in a single place: view bodies recompute even if the state change was unrelated to that view. There are certainly ways to fix that (like TCA’s ViewStore), but as always, that comes with complexity and also with a feeling that we are kind of fighting the framework.

Fortunately, the new Observation framework is here to fix this. Or not… Let’s see.

AnyView is a type-erased view that can be handy in SwiftUI containers consisting of heterogeneous views. In these cases, you don’t need to specify the concrete type of all the views that can be in the view hierarchy. With this approach, you can avoid using generics, thus simplifying your code.

However, that can come with a performance penalty. As mentioned in a previous post, SwiftUI relies on the type of the views to compute the diffing. If it’s AnyView (which is basically a wrapper type), SwiftUI will have a hard time figuring out the view’s identity and its structure, and it will just redraw the whole view, which is not really efficient. You can find more details about SwiftUI’s diffing mechanism in this great WWDC talk.

Apple also mentioned several times that we should avoid using AnyView inside a ForEach, by saying it may cause performance issues. A possible case where this can be measured is an endless list of different views, presenting different types of data (e.g. chats, activity feeds, etc). In this post, I will do some measurements using Stream’s SwiftUI chat SDK, by using its default generics-based implementation, and comparing it with a modified implementation that uses AnyView.

Combine Astro, htmx and Alpine.js to create modern web applications sending HTML over the wire, replacing the SPA JS-heavy approach with a much simpler set of mental models and workflows.

OSLog’s several logging levels exist to categorize different logging messages. The Console app and Xcode’s debugging console offer filters based on these log levels.

  • default (notice): The default log level, which doesn’t convey anything specific about the message. It’s better to be specific by using the other log levels.

  • info: Call this function to capture information that may be helpful, but isn’t essential, for troubleshooting.

  • debug: Debug-level messages to use in a development environment while actively debugging.

  • trace: Equivalent of the debug method.

  • warning: Warning-level messages for reporting unexpected non-fatal failures.

  • error: Error-level messages for reporting critical errors and failures.

  • fault: Fault-level messages for capturing system-level or multi-process errors only.

  • critical: Functional equivalent of the fault method.

  1. Mention @[email protected] in a toot with the address of the site you want to follow.
  2. RSS Parrot looks up the link in your toot, reads the website, and retrieves the address of its RSS or Atom feed. If this is the first time the site is requested, RSS Parrot creates a new account dedicated to it. This account will send out a new toot every time a new post appears in the feed. The account's name is derived from the website's address, using only dots between the letters. The birb replies to your toot with the name of the account parroting the feed.
  3. Follow the RSS Parrot account that the birb gave you. This is important and easy to forget! ;-) You'll see a toot from it in your timeline every time a new post is published on the website.

If you're relatively new to functional programming but already at least somewhat familiar with higher-order abstractions like Functors, Applicatives, and Monads, you may find it interesting to learn about the Yoneda lemma. This is not something you will use in your day-to-day work; it's just a relatively easy exercise that can help you better understand more complex abstractions, like Free structures. If you want a deep categorical explanation of this topic, I highly recommend Bartosz Milewski's “Understanding Yoneda”. In this article we will do something different — starting from practical use cases, we will try to understand what (Co)Yoneda is, how it can be useful, and how it can be implemented in Scala.
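The article works in Scala; for a taste of the practical payoff, here is the Coyoneda trick sketched in Python (an illustrative analog, not the article's code). Repeated `map` calls only compose functions, and the structure is traversed exactly once at `lower` — i.e., map fusion for free.

```python
from typing import Any, Callable
from dataclasses import dataclass

@dataclass
class Coyoneda:
    # Coyoneda f a ~ exists b. (f b, b -> a):
    # an untouched functor value plus an accumulated function.
    run: Any              # the underlying structure (here: a list)
    fn: Callable          # the fused mapping function

    @staticmethod
    def lift(fa):
        return Coyoneda(fa, lambda x: x)

    def map(self, g):
        # map composes functions; the list is NOT traversed here
        f = self.fn
        return Coyoneda(self.run, lambda x: g(f(x)))

    def lower(self):
        # a single traversal applies the fused function
        return [self.fn(x) for x in self.run]

c = Coyoneda.lift([1, 2, 3]).map(lambda x: x + 1).map(lambda x: x * 10)
print(c.lower())  # -> [20, 30, 40]
```

Two `map`s, one pass over the list: that is the "practical use case" the article builds from.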

Variable environments are the time-honored way of making sense of free variables, used in programming language theory as well as when writing interpreters and some compilers. Algebraic effects give another way, as was pointed out already at HOPE 2017. Although a theoretical curiosity, it may have surprising practical benefits: a new way of writing compilers, with incremental type-checking, easy variable-usage and leaf-function analyses. This work-in-progress report prototypes and illustrates the idea.

In the environment semantics the meaning of an expression is a function from the environment, which is opaque and cannot be examined. We cannot tell which variables in the environment have actually been used, and how many times. Algebraic effects make the denotation more observable: a handler can watch questions and find out which variables have been asked about, and how many times. Thus we obtain the variable usage analysis in the ordinary course of compilation, almost for free, so to speak.

It remains to be seen how this promise holds for a real compiler for a realistic programming language. I intend to find out by trying this technique out in the new installment of the compiler class (which is underway).
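The core idea can be made concrete in a tiny Python sketch (an analog for illustration, not the report's prototype): variable lookup becomes an "effect" answered by a handler, and because the handler sees every question, usage analysis falls out of ordinary evaluation.

```python
from collections import Counter

# Expressions: ("var", name) | ("lit", n) | ("add", l, r) | ("mul", l, r)
def eval_expr(expr, ask):
    tag = expr[0]
    if tag == "var":
        return ask(expr[1])   # the "effect": ask the handler for a variable
    if tag == "lit":
        return expr[1]
    _, l, r = expr
    a, b = eval_expr(l, ask), eval_expr(r, ask)
    return a + b if tag == "add" else a * b

def run_with_usage(expr, env):
    # The handler both answers lookups and records them,
    # so variable-usage analysis is a byproduct of evaluation.
    uses = Counter()
    def handler(name):
        uses[name] += 1
        return env[name]
    return eval_expr(expr, handler), uses

# x*x + y: uses x twice, y once, z never
expr = ("add", ("mul", ("var", "x"), ("var", "x")), ("var", "y"))
val, uses = run_with_usage(expr, {"x": 3, "y": 4, "z": 0})
print(val, dict(uses))  # -> 13 {'x': 2, 'y': 1}
```

An opaque environment function could never report that `z` went unused; the handler can.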

Compilers is a practical course. Its goal is to build a real compiler, which compiles a high-level language down to the actual x86-64 machine code and produces an executable that runs on students' laptops. The source language is "Tiger": a procedural language in the spirit of Pascal — or C with arbitrarily nested functions. The compiler itself is to be developed in OCaml.

The characteristic of the course is an iterative, incremental development: we start with the most trivial source language, develop the full compiler for it, and then keep extending the source language and the compiler in small steps, reusing the earlier work as much as possible. At each iteration, we build the complete, end-to-end compiler producing runnable and testable executables, for a (progressively larger) subset of the source language.

Another characteristic is the extensive use of the tagless-final style, taking full advantage of the extensibility it affords. Extensibility here means reuse — of type-checked and compiled artifacts from the previous increment — rather than copy-paste. The compiler is hence structured as a stack of domain-specific languages, with parsing at the bottom and assembly at the top. The languages are extended by adding new operations here and there (and only occasionally by redirection).

We cover all standard material for the compiler course, from parsing and type-checking to analyses, optimizations, calling conventions and assembly generation — but in a quite non-traditional fashion.
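The tagless-final extensibility the course leans on can be shown in miniature. This is a Python sketch (the course uses OCaml): a term is a function over an interpreter object, so new operations and new interpreters can be added without touching — or recompiling — existing terms.

```python
# Base language: integer literals and addition, tagless-final style.
# A "term" is a function from an interpreter (algebra) to a value.
def lit(n):    return lambda alg: alg.lit(n)
def add(l, r): return lambda alg: alg.add(l(alg), r(alg))

class Eval:                          # interpreter 1: evaluation
    def lit(self, n): return n
    def add(self, a, b): return a + b

class Show:                          # interpreter 2: pretty-printing
    def lit(self, n): return str(n)
    def add(self, a, b): return f"({a} + {b})"

# Extension: negation, reusing all earlier definitions unchanged.
def neg(e):    return lambda alg: alg.neg(e(alg))

class EvalNeg(Eval):
    def neg(self, a): return -a

class ShowNeg(Show):
    def neg(self, a): return f"(-{a})"

term = add(lit(1), neg(add(lit(2), lit(3))))   # 1 + -(2 + 3)
print(term(EvalNeg()))   # -> -4
print(term(ShowNeg()))   # -> (1 + (-(2 + 3)))
```

Extending the language was purely additive — the essence of "reuse rather than copy-paste."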

The MLC LLM iOS app can be installed in two ways: through the pre-built package or by building from the source. If you are an iOS user looking to try out the models, the pre-built package is recommended. If you are a developer seeking to integrate new features into the package, building the iOS package from the source is required.

2023

December

Boost your Metal app’s performance by upscaling lower-resolution content to save GPU time.

The MetalFX framework integrates with Metal to upscale a relatively low-resolution image to a higher output resolution in less time than it takes to render directly to the output resolution.

Use the GPU time savings to further enhance your app or game’s experience. For example, add more effects or scene details.

MetalFX gives you two different ways to upscale your input renderings:

  • Temporal antialiased upscaling
  • Spatial upscaling

If you can provide pixel color, depth, and motion information, add an MTLFXTemporalScaler instance to your render pipeline. Otherwise, add an MTLFXSpatialScaler instance, which only requires a pixel color input texture.

Because the scaling effects take time to initialize, make an instance of either effect at launch or when a display changes resolutions. Once you’ve created an effect instance, you can use it repeatedly, typically once per frame.

The MPSKernel is the base class for all Metal Performance Shaders kernels. It defines the baseline behavior for all kernels, declaring the device to run the kernel on, some debugging options, and a user-friendly label, should one be required. Derived from this class are the MPSUnaryImageKernel and MPSBinaryImageKernel subclasses, which define shared behavior for most image processing kernels (filters) such as edging modes, clipping, and tiling support for image operations that consume one or two source textures. Neither these nor the MPSKernel class are meant to be used directly. They just provide API abstraction and in some cases may allow some level of polymorphic manipulation of image kernel objects.

The thread object's dictionary. You can use the returned dictionary to store thread-specific data. The thread dictionary is not used during any manipulations of the NSThread object—it is simply a place where you can store any interesting data. For example, Foundation uses it to store the thread’s default NSConnection and NSAssertionHandler instances. You may define your own keys for the dictionary.

Observe audio session notifications to ensure that your app responds appropriately to interruptions.

Interruptions are a common part of the iOS and watchOS user experiences. For example, consider the scenario of receiving a phone call while you’re watching a movie in the TV app on your iPhone. In this case, the movie’s audio fades out, playback pauses, and the sound of the call’s ringtone fades in. If you decline the call, control returns to the TV app, and playback begins again as the movie’s audio fades in. At the center of this behavior is your app’s audio session. As interruptions begin and end, the audio session notifies any registered observers so they can take appropriate action. For example, AVPlayer monitors your app’s audio session and automatically pauses playback in response to interruption events. You can monitor these changes by key-value observing the player’s timeControlStatus property, and update your user interface as necessary when the player pauses and resumes playback.

Make it easy for people to start activities from your app’s UI, from the system share sheet, or using AirDrop. After you define one or more SharePlay activities for your app, make them easy for people to discover in your UI. Include buttons, menu items, and other elements to start activities, present activities in system interfaces like the share sheet, and update your activities to take advantage of other system behaviors.

Starting an activity requires an active FaceTime call or Messages conversation. When a conversation is active, you can start an activity right away from your UI. If no conversation is active, the Group Activities framework facilitates starting a conversation as part of starting your activity. Some system features also help you start conversations. For guidance about the best ways to add SharePlay support to your app’s UI, see Human Interface Guidelines > SharePlay.

Enhance the appearance of objects in a RealityKit scene with Physically Based Rendering (PBR).

A Material instance describes the surface properties of an entity and controls how RealityKit renders that entity. A PhysicallyBasedMaterial is a type of material that closely approximates the way light bounces off objects in the real world. It creates very realistic rendered objects that look natural when placed into an AR scene. When you import models from USDZ files, RealityKit automatically creates one or more PhysicallyBasedMaterial instances from the PBR material settings in the file. You can also create PBR materials manually, either to change the appearance of an entity loaded from a USDZ file at runtime, or to use PBR rendering with procedurally created entities.

Implementing Special Rendering Effects with RealityKit Postprocessing

In iOS 15 and later, and macOS 12 and later, you can modify RealityKit’s rendered frame buffer before your app displays it by registering a callback function. This sample demonstrates how to create a variety of different postprocess effects for a RealityKit scene using four different technologies:

  • Metal kernel functions
  • Metal Performance Shaders
  • Core Image
  • SpriteKit rendering

It also demonstrates how to combine multiple postprocess technologies by using both Metal kernel functions and Core Image filters at the same time. The generated app displays a Reality Composer scene in AR and lets you select different postprocessing effects from a list.

A fully native Spline Metal renderer to help you bring 3D to iOS, iPadOS, macOS and visionOS.

Update your SharePlay activities to support Spatial Personas and the shared context when running in visionOS.

A person who participates in SharePlay activities on Apple Vision Pro has the option to participate using their Spatial Persona. The system arranges Spatial Personas around the activity content, giving each person a clear view of the content and each other. Each person sees the facial expressions of other participants, what they’re looking at, and where they’re pointing. This experience creates the feeling that they’re in the same physical space interacting with shared content and each other.

To maintain the experience when Spatial Personas are visible, apps share additional information to maintain the shared context for the activity. Because participants can see where others are looking, your app’s content must look the same for everyone. Share any additional information you need to keep everyone’s content in sync visually. For example, synchronize your window’s scroll position to ensure everyone sees the same portion of that window.

You don’t need to define new GroupActivity types specifically to support Spatial Personas. The system automatically displays Spatial Personas for existing activities that take place in a window or volume. However, if you support activities in a Full Space, you need to do additional work to support Spatial Personas for your experience. For information about how to define activities in your app, see Defining your app’s SharePlay activities.

Create synchronized media experiences that enable users to watch and listen across devices.

Watching TV and movies, and listening to music, can be more fun when you do it with friends and family. However, getting together in person isn’t always an option. Beginning with iOS 15, tvOS 15, and macOS 12, you have the ability to create media apps that let people watch and listen together wherever they are. This capability is possible using AVFoundation and the new GroupActivities frameworks.

AVFoundation introduces a new class, AVPlayerPlaybackCoordinator, that synchronizes the timing of AVPlayer objects across devices. Apps use the GroupActivities framework to connect playback coordinators using a GroupSession object.

Crash reports and addresses can be scary, so before going any further, let’s take a step back to frame what we are doing. Part 1 showed that Apple’s crash reports have symbols for their frameworks. We also know that a symbol’s address on disk is the same across a device and OS pairing. We want to get all the possible crashes (for a device x OS pair) from Apple and then map the memory address to the symbol name. To do this, we need to reliably calculate a memory address, allowing us to symbolicate crash reports.

Think of this part like we’re doing Algebra. We know the equations for how addresses are calculated. Now, we need to solve for our variables.

First, let’s find the linker address, which is defined at the time of compilation and can be found within the binary, making it easy to get. We'll need:

> * ipsw to extract the shared cache from the IPSW
>     ```
>     brew install blacktop/tap/ipsw
>     ```
> * DyldExtractor to extract the ipsw shared cache
>     ```
>     python3 -m pip install dyldextractor
>     ```
> * `otool`
>
> After installing ipsw, our first step is to extract the shared cache:
> ```
> ipsw extract --dyld PATH_TO_IPSW
> ```
> Now, we can extract the specific framework binary using DyldExtractor:
> ```
> dyldex -e /System/Library/Frameworks/YOUR_FRAMEWORK.framework/YOUR_FRAMEWORK ./PATH_TO_EXTRACTED_IPSW/dyld_shared_cache_arm64e
> ```
>
> This process isolates our framework binary from the shared cache. The next step involves using otool to determine the linker address. For this, we inspect the load commands and specifically look for the segname __TEXT field in the output.
> ```
> otool -l binaries/System/Library/Frameworks/SwiftUI.framework/SwiftUI | grep LC_SEGMENT -A8
> ```

The official Sourcegraph/Cody plugin for Neovim

In Swift, property observers such as willSet and didSet are not called when a property is set in an initializer. This is by design, as the initializer's purpose is to set up the initial state of an object, and during this phase, the object is not yet fully initialized. However, if we need to perform some actions similar to what we'd do in property observers during initialization, there are some workarounds.

Did you know you can connect the Xcode debugger to a running process? You can also have Xcode wait for a process to launch before connecting. This helps when debugging issues when your App is launched in response to an external event such as a notification. We can also use it to peek at some of the built-in Apps in the simulator.

The steps to connect the debugger to a running process:

  1. Build and run your App to install it on the simulator or device of your choosing. I find this works fine even with devices connected via WiFi.
  2. Stop the Xcode debug session. If you want to set some breakpoints for when the App is launched do that now. For example, to inspect the launch options you could set a breakpoint on didFinishLaunchingWithOptions in the App delegate.
  3. You can either launch the App and then attach the debugger or attach and then launch. Either way use the Xcode Debug menu to attach the debugger to a process.
  4. If the App is not yet running you will need to attach to it by name. For example, here I am going to attach to the AdaptType project. If the process is not running the Xcode debugger will wait for it to start. This is useful if you want the App to launch in response to an external event such as a notification.
  5. If the App is already running you can connect to it directly either by name or finding it in the list of running processes. Make sure you have the device or simulator selected as the target of your Xcode project then use the “Attach to Process” option in the Debug menu. Xcode suggests the most likely process based on your current Xcode project and destination or you can find it in the list of running processes.
  6. Once attached you can debug as usual. The debugger will stop if it hits a breakpoint or you can use the view debugger to inspect the view hierarchy.

Have a crash report coming that won't fully symbolicate? This symbolicator contains symbols for SwiftUI and other private frameworks. Upload or paste your crash data below to symbolicate all addresses.

📋 A full list of supported symbols can be found at our open-source ETSymbolication repo.

📚 Read our posts exploring how we uncovered hidden symbols + built an open-source symbolication reference.

🍎 Check out the open source repo to learn how to contribute more symbols to this tool

The Sixth International Conference on Applied Category Theory took place at the University of Maryland on 31 July - 4 August 2023, following the previous meetings at Leiden (2018), Oxford (2019), MIT (2020, fully online), Cambridge (2021) and Strathclyde (2022). It was preceded by the Adjoint School 2023 (24 - 28 July), a collaborative research event in which junior researchers worked under the mentorship of experts. The conference comprised 59 contributed talks, a poster session, an industry showcase session, four tutorial sessions and a session where junior researchers who had attended the Adjoint School presented the results of their research at the school. Information regarding the conference may be found at https://act2023.github.io/.

A CLI for extracting libraries from Apple's dyld shared cache file

As of macOS Big Sur, instead of shipping the system libraries with macOS, Apple ships a generated cache of all built in dynamic libraries and excludes the originals. This tool allows you to extract these libraries from the cache for reverse engineering.

Control whether a view exists, and how that affects the overall layout.

If your design has views that aren’t always relevant, you have a choice about how their absence affects the overall layout. You can lay out all the other content as if the view doesn’t exist, then update the position of the other content when the view becomes visible. Or, you can reserve space for the view regardless of whether it’s visible, so that when it becomes visible, none of the other content needs to move to accommodate it.

Let's get started! If you have any trouble with these instructions or have questions after getting going, we're here to help:

  • The #cloud channel on Discord is a place to follow Unison Cloud news and updates, get help, and find tips and tricks.
  • The chat widget in the corner of unison.cloud is monitored by the Unison Cloud team and is another place you can get help.
  • For general Unison language questions and announcements, visit the Unison Slack. Bug reports, feature requests, ideas, and feedback are most welcome on any of these channels. The Discord and Slack are also nice places to meet other folks using Unison + Unison Cloud. It's a friendly and supportive group!

Here’s how to turn off “automated content recognition,” the Shazam-like software on smart TVs that tracks what you’re watching.

A type that represents a globally-unique actor that can be used to isolate various declarations anywhere in the program.

A type that conforms to the GlobalActor protocol and is marked with the @globalActor attribute can be used as a custom attribute. Such types are called global actor types, and can be applied to any declaration to specify that such types are isolated to that global actor type. When using such a declaration from another actor (or from nonisolated code), synchronization is performed through the shared actor instance to ensure mutually-exclusive access to the declaration.

Reality Kit for making AR apps on iOS.

In this article, you can learn how to use RealityKit from basic to advanced.

Please use it as an introduction and an index.

Cover the 1000s of edge cases of your application - in 5 minutes of setup, with zero maintenance.

Alerts are an extension of the blockquote syntax that you can use to emphasize critical information. On GitHub, they are displayed with distinctive colors and icons to indicate the importance of the content.

> [!NOTE]
> Highlights information that users should take into account, even when skimming.

> [!TIP]
> Optional information to help a user be more successful.
> 
> [!IMPORTANT]
> Crucial information necessary for users to succeed.
> 
> [!WARNING]
> Critical content demanding immediate user attention due to potential risks.
> 
> [!CAUTION]
> Negative potential consequences of an action.


This cheat sheet is automatically generated from GitHub Emoji API and Unicode Full Emoji List.

The era of spatial computing is here, where digital content blends seamlessly with your physical space. So you can do the things you love in ways never before possible. This is Apple Vision Pro.

mootool is an attempt at an open source replacement for the legendary jtool2, allowing it to continue to progress with the Apple research community. Ruby was selected because Homebrew maintains a good Mach-O parser that is pure (meaning it needs no dependencies other than a Ruby runtime).

As a secondary goal, every command should provide output in both human-readable and machine-readable (YAML) formats, making it suitable for use in scripting.

A dynamic property type that allows access to a namespace defined by the persistent identity of the object containing the property (e.g. a view).

Third-party software development kits (SDKs) can provide great functionality for apps; they can also have the potential to impact user privacy in ways that aren’t obvious to developers and users. As a reminder, when you use a third-party SDK with your app, you are responsible for all the code the SDK includes in your app, and need to be aware of its data collection and use practices. At WWDC23, we introduced new privacy manifests and signatures for SDKs to help bring more awareness for how third-party SDKs use data. This functionality is a step forward for all apps, and we encourage all SDKs to adopt it to better support the apps that depend on them.

Privacy Manifests

Privacy manifest files outline the privacy practices of the third-party code in an app, in a single standard format. When you prepare to distribute your app, Xcode will combine the privacy manifests across all the third-party SDKs used by your app into a single, easy-to-use report. With one comprehensive report that summarizes all the third-party SDKs found in an app, it will be even easier for you to create more accurate Privacy Nutrition Labels.
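For orientation, a minimal `PrivacyInfo.xcprivacy` manifest is a property list along these lines (the specific data type, purpose, and API reason code here are illustrative examples; consult Apple's documentation for the valid values):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>NSPrivacyTracking</key>
    <false/>
    <key>NSPrivacyCollectedDataTypes</key>
    <array>
        <dict>
            <key>NSPrivacyCollectedDataType</key>
            <string>NSPrivacyCollectedDataTypeEmailAddress</string>
            <key>NSPrivacyCollectedDataTypeLinked</key>
            <true/>
            <key>NSPrivacyCollectedDataTypeTracking</key>
            <false/>
            <key>NSPrivacyCollectedDataTypePurposes</key>
            <array>
                <string>NSPrivacyCollectedDataTypePurposeAppFunctionality</string>
            </array>
        </dict>
    </array>
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <string>CA92.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```

Xcode aggregates these manifests from all third-party SDKs into the single report mentioned above.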

Signatures for SDKs

Now with signatures for SDKs, when you adopt a new version of a third-party SDK in your app, Xcode will validate that it was signed by the same developer, improving the integrity of your software supply chain.

SDKs that require a privacy manifest and signature

The following are commonly used SDKs in apps on the App Store. Starting in spring 2024, you must include the privacy manifest for any SDK listed below when you submit new apps in App Store Connect that include those SDKs, or when you submit an app update that adds one of the listed SDKs as part of the update. Signatures are also required in these cases where the listed SDKs are used as binary dependencies. Any version of a listed SDK, as well as any SDKs that repackage those on the list, are included in the requirement.

Plain Swift is a simple development environment that allows you to develop executables and dynamic libraries using the Swift programming language. It supports syntax highlighting, error highlighting, inline code suggestions, and archiving the compiled product for distribution along with the necessary runtime libraries.

This paper proposes a new type system for concurrent programs, allowing threads to exchange complex object graphs without risking destructive data races. While this goal is shared by a rich history of past work, existing solutions either rely on strictly enforced heap invariants that prohibit natural programming patterns or demand pervasive annotations even for simple programming tasks. As a result, past systems cannot express intuitively simple code without unnatural rewrites or substantial annotation burdens. Our work avoids these pitfalls through a novel type system that provides sound reasoning about separation in the heap while remaining flexible enough to support a wide range of desirable heap manipulations. This new sweet spot is attained by enforcing a heap domination invariant similarly to prior work, but tempering it by allowing complex exceptions that add little annotation burden. Our results include: (1) code examples showing that common data structure manipulations which are difficult or impossible to express in prior work are natural and direct in our system, (2) a formal proof of correctness demonstrating that well-typed programs cannot encounter destructive data races at run time, and (3) an efficient type checker implemented in Gallina and OCaml.

Answers common recruiter & interview questions.

Place content based on the current position of a known image in a person’s surroundings.

Use ARKit’s support for tracking 2D images to place 3D content in a space. ARKit provides updates to the image’s location as it moves relative to the person. If you supply one or more reference images in your app’s asset catalog, people can use a real-world copy of that image to place virtual 3D content in your app. For example, if you design a pack of custom playing cards and provide those assets to people in the form of a real-world deck of playing cards, they can place unique content per card in a fully immersive experience.
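As a sketch of what that looks like in code, the following uses visionOS ARKit's `ImageTrackingProvider`. The asset-catalog group name "Cards" is a hypothetical placeholder, and error handling is elided:

```swift
import ARKit

let session = ARKitSession()
let imageTracking = ImageTrackingProvider(
    referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "Cards")
)

try await session.run([imageTracking])

for await update in imageTracking.anchorUpdates {
    // The anchor's transform tracks the physical image as it moves.
    let transform = update.anchor.originFromAnchorTransform
    // Position your 3D content using this transform.
    _ = transform
}
```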

Creating 3D objects from photographs

In iOS 17 and later, and macOS 12 and later, to create a 3D object from a series of photographs, submit the images to RealityKit using a PhotogrammetrySession, register to receive status updates, and start the session. The completed process produces a 3D representation of the photographed object that you can use in your app or export to other software like Reality Composer.

For more information on capturing high-quality images for photogrammetry, see Capturing photographs for RealityKit Object Capture.
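A minimal sketch of that flow, assuming `imagesFolder` is a URL to a directory of captured photos and the output filename is arbitrary:

```swift
import RealityKit

let session = try PhotogrammetrySession(input: imagesFolder)

// Register for status updates before starting the session.
Task {
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Model ready")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}

// Start processing; .medium trades detail for speed.
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "model.usdz"), detail: .medium)
])
```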

Deprecated! The functionality of joker is now built into Jtool2 when used with --analyze on any kernelcache


Joker is a quick and dirty iOS kernelcache handling utility I've written to assist in my reverse engineering. Apple tries their damn hardest to make reversing the kernel as hard as possible: With every release, more symbols are stripped. The kernelcache, being prelinked, requires less symbols to begin with (and tables in memory, as all LINKEDIT segments, are jettisoned). And - let's not forget - the kernelcache is encrypted. 32-bit kernelcaches can be decrypted thanks to the holy work by @xerub and others, but no 64-bit kernelcache keys exist (publicly), and the only way to "see" the kernel is by dumping it.

Applies effects to this view, while providing access to layout information through a 3D geometry proxy.

Applies effects to this view, while providing access to layout information through a geometry proxy.

A container view that defines its content as a function of its own size and coordinate space.

This view returns a flexible preferred size to its own container view.

This container differs from GeometryReader in that it also reads available depth, and thus also returns a flexible preferred depth to its parent layout. Use the 3D version only in situations where you need to read depth, because it affects depth layout when used in a container like a ZStack.

Detect horizontal surfaces like tables and floors, as well as vertical planes like walls and doors.

Flat surfaces are an ideal place to position content in an app that uses a Full Space in visionOS. They provide a place for virtual 3D content to live alongside a person’s surroundings. Use plane detection in ARKit to detect these kinds of surfaces and filter the available planes based on criteria your app might need, such as the size of the plane, its proximity to someone, or a required plane orientation.
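The filtering described above might be sketched like this with `PlaneDetectionProvider`; the 0.5 m size threshold is an arbitrary example value:

```swift
import ARKit

let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

try await session.run([planeDetection])

for await update in planeDetection.anchorUpdates {
    let anchor = update.anchor
    // Filter for sufficiently large horizontal planes.
    if anchor.alignment == .horizontal,
       anchor.geometry.extent.width > 0.5 {
        // Place content relative to anchor.originFromAnchorTransform.
    }
}
```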

Marbla is an experimental display typeface exploring the possibilities of variable fonts to change the mood and personality of a typeface. Starting with a friendly regular style the letterforms can be modified via the axes Inktrap, Balloon and Curve. The result is a variable font with a range of expressive and playful display styles that can be combined with the legible Regular. The combination of the three axes creates countless possibilities of variation.

We will dive into the dark depths of the Objective-C runtime to perform acts forbidden in Swift. So, don’t be scared - the Dynamic Funtime™️ of Objective-C allows us to use an old friend, NSInvocation, which isn’t available to Swift.

  1. Find API To Invoke
  2. Import a Bundle
  3. Create a Class Type
  4. NSInvocation
  5. Run on your Device

A high-level representation of a collection of vertices and edges that define a shape.

Creates a new rounded-corner box mesh with the specified extent.

Swift Macros are a powerful new feature introduced in Swift 5.9 that allows developers to generate code at compile time. They are a great way to reduce boilerplate code and help scale your codebase by leveraging the power of metaprogramming.
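For illustration, here is the canonical `stringify` example from Xcode's macro template; the module and type names on the right-hand side are placeholders for the implementation target:

```swift
// Declaration side (in the macro package):
@freestanding(expression)
public macro stringify<T>(_ value: T) -> (T, String) =
    #externalMacro(module: "MyMacrosImpl", type: "StringifyMacro")

// Use site — expands at compile time into the value and its source text:
let (result, code) = #stringify(2 + 3)
// result == 5, code == "2 + 3"
```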

Due to the way they are implemented and their tight coupling with SPM, Swift macros are usually defined in Swift packages and, as such, they are usually imported into Xcode projects such as iOS apps or frameworks as SPM dependencies.

While this is fine in most cases, there are certain situations where you might not want or be able to import the macro as a Swift package dependency. For example, you might want to use a macro in a CocoaPods library or obfuscate its source code.

In these cases, and as I will show you in this article, you might want to import your macro into your Xcode project as a binary instead and not as an SPM dependency.

Learn how to play animation on 3D models using RealityKit.

When we design augmented reality experiences, a crucial step involves specifying animated behaviors for objects. This will add layers of interactivity, transforming static elements into dynamic components within the augmented environment. In some cases, models may already contain animations, however, we can still define specific behaviors using RealityKit.
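A minimal sketch of playing a bundled animation, assuming `entity` was loaded from a USDZ file that ships with animations:

```swift
import RealityKit

// Play the first animation the model ships with, looping forever.
if let animation = entity.availableAnimations.first {
    entity.playAnimation(
        animation.repeat(duration: .infinity),
        transitionDuration: 0.3,
        startsPaused: false
    )
}
```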

November

This article explains how to interface between NIO and Swift Concurrency.

NIO was created before native Concurrency support in Swift existed, hence, NIO had to solve a few problems that have solutions in the language today. Since the introduction of Swift Concurrency, NIO has added numerous features to make the interop between NIO’s Channel eventing system and Swift’s Concurrency primitives as easy as possible.
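One of those interop features is `NIOAsyncChannel`, which wraps a `Channel` so you can consume it with `for try await`. The echo-server shape below is a hedged sketch based on the swift-nio documentation, not a drop-in implementation:

```swift
import NIOCore
import NIOPosix

let server = try await ServerBootstrap(group: NIOSingletons.posixEventLoopGroup)
    .bind(host: "127.0.0.1", port: 8080) { channel in
        channel.eventLoop.makeCompletedFuture {
            // Wrap the raw Channel so Swift Concurrency can drive it.
            try NIOAsyncChannel<ByteBuffer, ByteBuffer>(wrappingChannelSynchronously: channel)
        }
    }

try await server.executeThenClose { inbound in
    for try await connection in inbound {
        Task {
            try await connection.executeThenClose { inbound, outbound in
                for try await buffer in inbound {
                    try await outbound.write(buffer) // echo back
                }
            }
        }
    }
}
```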

A visionOS app icon is circular and includes a background layer and one or two layers on top, producing a three-dimensional object that subtly expands when people view it.

Add app icon variations to represent your app in places such as Settings, search results, and the App Store.

Every app has a distinct app icon that communicates the app’s purpose and makes it easy to recognize throughout the system. Apps require multiple variations of the app icon to look great in different contexts. Xcode can help generate these variations for you using a single high-resolution image, or you can configure your app icon variations by using an app icon’s image set in your project’s asset catalog. visionOS and tvOS app icons are made up of multiple stacked image layers you configure in your project’s asset catalog.

Module aliases for targets in this dependency. The key is an original target name and the value is a new unique name mapped to the name of the .swiftmodule binary.

Step 1: Diff driver

Define a diff driver in your $HOME/.gitconfig. The xfuncname configuration specifies a regular expression that is used to match a line you want to see in the hunk header after the @@ bit. Covering all possible options with a regexp probably isn't possible, but this should cover most of the cases:

[diff "swift"]
xfuncname = ^[ \t]*(((private |public |internal |final |open )*class|(private |public |internal )*struct|(private |public |internal )*actor|(private |public |internal )*func|(private |public |internal )*extension|(private |public |internal )*enum)[ \t].*)$

Step 2: Global git attributes

If you don't have a global git attributes file configured, set one up:

git config --global core.attributesfile ~/.gitattributes

Step 3: Configure the swift driver for Swift files

Edit the ~/.gitattributes file to make Git use your newly defined diff driver for Swift files. Add the following line:

*.swift diff=swift

Wine Supercharged... with the power of Apple's Game Porting Toolkit.

The testing library provides much of the same functionality of XCTest, but uses its own syntax to declare test functions and types. This document covers the process of converting XCTest-based content to use the testing library instead.

I've been wondering if it would be possible to wrap observation into an AsyncStream. That would let me use an asynchronous for loop to iterate over changes. In this post I will share how I implemented it.
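A simplified version of that idea uses `withObservationTracking(_:onChange:)`, re-registering after each change fires. This sketch ignores cancellation and actor isolation for brevity:

```swift
import Observation

@Observable final class Counter {
    var value = 0
}

// Emit the current value, then re-register for the next change.
func values(of counter: Counter) -> AsyncStream<Int> {
    AsyncStream { continuation in
        @Sendable func observe() {
            let current = withObservationTracking {
                counter.value
            } onChange: {
                // onChange fires once per registration, so register again.
                observe()
            }
            continuation.yield(current)
        }
        observe()
    }
}

// Usage: for await value in values(of: counter) { ... }
```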

A type you use to coordinate your interface’s behavior when an active SharePlay session supports spatial placement of content.

A SystemCoordinator object helps you coordinate the presentation of your app’s content when spatial placement is active. In visionOS, the system can present a SharePlay activity as if the participants were together in the same room with the content. Each participant views the content from a particular vantage point, and sees the changes that others make. The system handles the placement of each participant’s Spatial Persona relative to the content, but you handle any changes to the content itself with the help of the SystemCoordinator object.

You don’t create a SystemCoordinator object directly. After you receive a GroupSession object for an activity, retrieve the system coordinator from the session’s systemCoordinator property. When you first retrieve the object, update its configuration property to tell the system how you want to arrange participants in the scene. After that, use the information in the system coordinator’s properties to keep your app’s interface up to date. When participants support spatial placement, send additional data to synchronize your content for those participants. For example, when one person scrolls the contents of a window, update the scroll position in the window of other spatially aware participants to preserve the shared context for everyone.

You choose what information to share among participants, and you choose how to manage the corresponding updates. A system coordinator object only helps you know when to make those changes. Observe the object’s published properties to receive automatic updates when the values change.

People expect most visionOS apps to support SharePlay. While wearing Apple Vision Pro, people choose the Spatial option in FaceTime to share content and activities with others.

In a shared activity, FaceTime can show representations of other participants — called Spatial Personas — within each wearer’s space, making everyone feel like they’re sharing the same experience in the same place. During a shared experience in FaceTime, people can interact with each other in natural ways through their Spatial Personas. For example, people can speak or gesture directly to others, tell when someone is paying attention to them, and know which person is using a shared tool or resource.

visionOS uses the concept of shared context to describe the characteristics of a shared activity that help people feel physically present with others while connecting over the same content. A shared context helps give people confidence that they’re experiencing the same thing as everyone else.

When people feel that they’re truly sharing an experience, social dynamics can encourage authentic, intuitive interactions. For example, people can communicate verbally and nonverbally to make plans, take turns, and share resources.

A structure that specifies the preferred arrangement of participant Spatial Personas in a shared simulation space.

Multipeer Connectivity is a high-level interface to Apple’s peer-to-peer Wi-Fi support. It includes:

  • A very opinionated networking model, where every participant in a session is a symmetric peer
  • User interface components for advertising and joining a session

Use it when your requirements are aligned with those features. Don’t use it if your program uses a client/server architecture; Network framework works better in that case. For an example, see Building a custom peer-to-peer protocol.

Important: A common misconception is that Multipeer Connectivity is the only way to use peer-to-peer Wi-Fi. That’s not the case. Network framework has opt-in peer-to-peer Wi-Fi support. For the details, see Peer-to-Peer networking.

Foundation also has peer-to-peer Wi-Fi support:

These APIs were marked as to-be-deprecated in 2021 (see Versions). If you have existing code that uses them, make a plan to migrate to Network framework.

The dnssd API supports peer-to-peer Wi-Fi but with an important caveat: If you advertise a service on peer-to-peer Wi-Fi using dnssd, the service’s listener must be run by a peer-to-peer aware API, like NWListener or NSNetService. Given that those APIs already have a facility to opt in to peer-to-peer Wi-Fi, there’s very little point using dnssd for this.

Apple platforms have a wide range of networking APIs, spanning many different frameworks:

With all that choice, it’s hard to know where to start. This technote aims to clarify that. It makes specific recommendations as to which API to use for a given network protocol. It then discusses Alternative APIs and some Best practices.

The focus here is on APIs that allow you to use the networking stack. If you want to extend the networking stack—for example, to add support for a custom VPN protocol—implement a Network Extension provider. For the details, see Network Extension.

In this post, we’ll take a look at how to customize the macOS menu bar for a SwiftUI app, using SwiftUI tools like CommandMenu and CommandGroup.
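As a taste of those tools, here is a minimal sketch: `CommandMenu` adds a new top-level menu, while `CommandGroup` extends a standard one. Menu and button titles are illustrative:

```swift
import SwiftUI

@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .commands {
            // Add a custom top-level menu to the menu bar.
            CommandMenu("Tools") {
                Button("Refresh") { /* ... */ }
                    .keyboardShortcut("r", modifiers: [.command, .shift])
            }
            // Insert an item after the standard File > New group.
            CommandGroup(after: .newItem) {
                Button("New From Template") { /* ... */ }
            }
        }
    }
}
```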

Businesses of all kinds and sizes are exploring the possibilities of the infinite canvas of Apple Vision Pro — and realizing ideas that were never before possible. We caught up with two of those companies — JigSpace and PTC — to find out how they’re approaching the new world of visionOS.

A content configuration suitable for hosting a hierarchy of SwiftUI views.

A safe way to synchronously assume that the current execution context belongs to the MainActor. This API should only be used as last resort, when it is not possible to express the current execution context definitely belongs to the main actor in other ways. E.g. one may need to use this in a delegate style API, where a synchronous method is guaranteed to be called by the main actor, however it is not possible to annotate this legacy API with @MainActor.

Warning: If the current executor is not the MainActor’s serial executor, this function will crash.

Note that this check is performed against the MainActor’s serial executor, meaning that if another actor uses the same serial executor (by using sharedUnownedExecutor as its own unownedExecutor), this check will succeed, as from a concurrency safety perspective, the serial executor guarantees mutual exclusion of those two actors.
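The delegate scenario described above might look like this; `SomeDelegate` and `updateUI()` are hypothetical names standing in for a legacy API:

```swift
import Foundation

final class LegacyDelegate: NSObject, SomeDelegate {
    // The framework documents that this callback always arrives on the
    // main thread, but the protocol predates @MainActor, so the compiler
    // can't verify that.
    func legacyCallback() {
        MainActor.assumeIsolated {
            // Safe to touch main-actor state here; this traps at runtime
            // if the promise about the calling context is ever violated.
            updateUI()
        }
    }
}
```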

Expert in SwiftUI, SwiftData, and Observation framework

Expert in Elixir and Phoenix, uses official docs and forums for answers.

AsyncGraphics is a Swift package for working with images and video with async / await. The core type is simply just called Graphic, it's like an image and is backed by a MTLTexture.

A mechanism to interface between synchronous code and an asynchronous stream.

The closure you provide to the AsyncStream in init(_:bufferingPolicy:_:) receives an instance of this type when invoked. Use this continuation to provide elements to the stream by calling one of the yield methods, then terminate the stream normally by calling the finish() method.

Note: Unlike other continuations in Swift, AsyncStream.Continuation supports escaping.
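That escaping behavior is what makes bridging callback-based APIs possible. In this sketch, `startTemperatureUpdates` is a hypothetical synchronous callback API:

```swift
// Bridge a callback-based API into an AsyncStream.
let temperatures = AsyncStream<Double> { continuation in
    // The continuation escapes into the callback — exactly what
    // AsyncStream.Continuation permits.
    let sensor = startTemperatureUpdates { reading in
        continuation.yield(reading)
    }
    // Stop the source when the consumer cancels or the stream finishes.
    continuation.onTermination = { _ in
        sensor.stop()
    }
}

for await temperature in temperatures {
    print(temperature)
}
```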

To create a custom gesture, you rely on ARKit for information like hand positioning and joint orientation. Before you can offer custom gestures in your app, your app must be running in a Full Space and you must request people’s permission to access information about their hands. For developer guidance, see Setting up access to ARKit data.

Prioritize comfort in custom gestures. Continually test the ergonomics of all interactions that require custom gestures. A custom interaction that requires people to keep their arms raised for even a little while can be physically tiring, and repeating very similar movements many times in succession can stress people’s muscles and joints.

Consider carefully before defining custom gestures that involve multiple fingers or both hands. It can be challenging for people to perform custom gestures, and requiring them to position multiple fingers or use both hands at the same time can be even more difficult.

Avoid creating a custom gesture that requires people to use a specific hand. Expecting people to remember which hand to use for a custom gesture increases their cognitive load while also making your experience less welcoming to people with strong hand-dominance or limb differences.

If you decide to create a custom gesture, make sure it’s:

  • Inclusive. Gestures can mean different things to different people, so be sure your custom gestures don’t send messages you don’t intend.

  • Comfortable. Great custom gestures are physically easy for people to perform, especially over time.

  • Distinctive. Custom gestures that harmonize with your app or game can be easier for people to discover and remember, while enhancing their enjoyment of the experience.

  • Easy to describe. If you can’t use simple language and simple graphics to describe your custom gesture, it may mean that the gesture will be difficult for people to learn and perform.

A source of live data about the position of a person’s hands and hand joints.
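A minimal sketch of consuming that data with `HandTrackingProvider` in visionOS (permission handling is elided):

```swift
import ARKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()

try await session.run([handTracking])

for await update in handTracking.anchorUpdates {
    let anchor = update.anchor
    if let indexTip = anchor.handSkeleton?.joint(.indexFingerTip) {
        // Compose the joint's transform with the hand anchor's transform
        // to get the fingertip's position in world space.
        let worldTransform = anchor.originFromAnchorTransform
            * indexTip.anchorFromJointTransform
        _ = worldTransform
    }
}
```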

SwiftUI for iOS 17 and macOS Sonoma come with a fantastic new superpower: the ability to transform any SwiftUI view with Metal shaders, all hardware accelerated so complex effects run at lightning fast speeds even on old devices.

I want to help folks get started with Metal, so I've produced two free resources that will help everyone:

Inferno is a project that makes Metal shaders easy for everyone to use, but I've also gone a step further and added comprehensive documentation explaining exactly how each shader works so that others can learn too.
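Applying one of these shaders is a one-liner. In this sketch, `recolor` is a hypothetical function compiled from a `.metal` file in the app bundle, exposed by name through `ShaderLibrary`:

```swift
import SwiftUI

struct ShaderDemo: View {
    var body: some View {
        Image(systemName: "figure.walk.circle")
            .font(.system(size: 120))
            // Replace each pixel's color using the Metal shader function.
            .colorEffect(ShaderLibrary.recolor(.color(.blue)))
    }
}
```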

In this article, I will describe step by step how to configure Neovim to move away from Xcode. It took me several months to figure it all out piece by piece and to combine it into one working iOS development environment (I did it so you don’t have to :D). Hopefully, it won’t take you more than half a day to configure it with my help :).

It will be a little bit lengthy trip, and it will require setting up multiple plugins. I would recommend it to people who are already familiar with Vim. If you just installed Neovim, it could be overwhelming to learn Vim motions, Neovim environment, and set up dependencies, all at once.

If you are just starting with Neovim, take it slowly. First, learn Vim motions inside Xcode (by enabling Vim mode), in the meantime start configuring Neovim and get familiar with it by installing plugins, editing text files, JSON files, etc. Once you feel comfortable with Vim motions and Neovim, then try migrating your development :).

pkgx is a blazingly fast, standalone, cross‐platform binary that runs anything

The elements produced by the publisher, as an asynchronous sequence.

This property provides an AsyncPublisher, which allows you to use the Swift async-await syntax to receive the publisher’s elements. Because AsyncPublisher conforms to AsyncSequence, you iterate over its elements with a for-await-in loop, rather than attaching a subscriber.

The following example shows how to use the values property to receive elements asynchronously. The example adapts a code snippet from the filter(_:) operator’s documentation, which filters a sequence to only emit even integers. This example replaces the Subscribers.Sink subscriber with a for-await-in loop that iterates over the AsyncPublisher provided by the values property.

```swift
let numbers: [Int] = [1, 2, 3, 4, 5]
let filtered = numbers.publisher
    .filter { $0 % 2 == 0 }

for await number in filtered.values {
    print("\(number)", terminator: " ")
}
```

* [AsyncPublisher](https://developer.apple.com/documentation/combine/asyncpublisher)
> _A publisher that exposes its elements as an asynchronous sequence._
>
> AsyncPublisher conforms to [AsyncSequence](https://developer.apple.com/documentation/Swift/AsyncSequence), which allows callers to receive values with the for-await-in syntax, rather than attaching a [Subscriber](https://developer.apple.com/documentation/combine/subscriber).
>
> Use the [values](https://developer.apple.com/documentation/combine/publisher/values-1dm9r) property of the [Publisher](https://developer.apple.com/documentation/combine/publisher) protocol to wrap an existing publisher with an instance of this type.
* [viewIsAppearing(\_:)](https://developer.apple.com/documentation/uikit/uiviewcontroller/4195485-viewisappearing)
> _Notifies the view controller that the system is adding the view controller’s view to a view hierarchy._
>
> The system calls this method once each time a view controller’s view appears after the [viewWillAppear(\_:)](https://developer.apple.com/documentation/uikit/uiviewcontroller/1621510-viewwillappear) call. In contrast to `viewWillAppear(_:)`, the system calls this method after it adds the view controller’s view to the view hierarchy, and the superview lays out the view controller’s view. By the time the system calls this method, both the view controller and its view have received updated trait collections and the view has accurate geometry.
>
> You can override this method to perform custom tasks associated with displaying the view. For example, you might use this method to configure or update views based on the trait collections of the view or view controller. Or, because computing a scroll position relies on the view’s size and geometry, you might programmatically scroll a collection or table view to ensure a selected cell is visible when the view appears.
>
> If you override this method, you need to call super at some point in your implementation.
* [Previews in Xcode](https://developer.apple.com/documentation/swiftui/previews-in-xcode)
> _Generate dynamic, interactive previews of your custom views._
>
> When you create a custom [View](https://developer.apple.com/documentation/swiftui/view) with SwiftUI, Xcode can display a preview of the view’s content that stays up-to-date as you make changes to the view’s code. You use one of the preview macros — like [Preview(\_:body:)](https://developer.apple.com/documentation/swiftui/preview(_:body:)) — to tell Xcode what to display. Xcode shows the preview in a canvas beside your code.
>
> Different preview macros enable different kinds of configuration. For example, you can add traits that affect the preview’s appearance using the [Preview(\_:traits:\_:body:)](https://developer.apple.com/documentation/swiftui/preview(_:traits:_:body:)) macro or add custom viewpoints for the preview using the [Preview(\_:traits:body:cameras:)](https://developer.apple.com/documentation/swiftui/preview(_:traits:body:cameras:)) macro. You can also check how your view behaves inside a specific scene type. For example, in visionOS you can use the [Preview(\_:immersionStyle:traits:body:)](https://developer.apple.com/documentation/swiftui/preview(_:immersionstyle:traits:body:)) macro to preview your view inside an [ImmersiveSpace](https://developer.apple.com/documentation/swiftui/immersivespace).
> You typically rely on preview macros to create previews in your code. However, if you can’t get the behavior you need using a preview macro, you can use the [PreviewProvider](https://developer.apple.com/documentation/swiftui/previewprovider) protocol and its associated supporting types to define and configure a preview.
* [Immersive spaces](https://developer.apple.com/documentation/swiftui/immersive-spaces)
> _Display unbounded content in a person’s surroundings._
>
> Use an immersive space to present SwiftUI views outside of any containers. You can include any views in a space, although you typically use a [RealityView](https://developer.apple.com/documentation/RealityKit/RealityView) to present RealityKit content.
>
> You can request one of three styles of spaces with the [immersionStyle(selection:in:)](https://developer.apple.com/documentation/swiftui/scene/immersionstyle(selection:in:)) scene modifier:
> * The [mixed](https://developer.apple.com/documentation/swiftui/immersionstyle/mixed) style blends your content with passthrough. This enables you to place virtual objects in a person’s surroundings.
> * The [full](https://developer.apple.com/documentation/swiftui/immersionstyle/full) style displays only your content, with passthrough turned off. This enables you to completely control the visual experience, like when you want to transport people to a new world.
> * The [progressive](https://developer.apple.com/documentation/swiftui/immersionstyle/progressive) style completely replaces passthrough in a portion of the display. You might use this style to keep people grounded in the real world while displaying a view into another world.
>
> When you open an immersive space, the system continues to display all of your app’s windows, but hides windows from other apps. The system supports displaying only one space at a time across all apps, so your app can only open a space if one isn’t already open.
* [SpatialEventGesture](https://developer.apple.com/documentation/swiftui/spatialeventgesture)
> _A gesture that provides information about ongoing spatial events like clicks and touches._
>
> Use a gesture of this type to track multiple simultaneous spatial events and gain access to detailed information about each. For example, you can place a particle emitter at every location in a [Canvas](https://developer.apple.com/documentation/swiftui/canvas) that has an ongoing spatial event:

   ```swift
   struct ParticlePlayground: View {
       @State var model = ParticlesModel()

       var body: some View {
           Canvas { context, size in
               for particle in model.particles {
                   context.fill(Path(ellipseIn: particle.frame),
                                with: .color(particle.color))
               }
           }
           .gesture(
               SpatialEventGesture()
                   .onChanged { events in
                       for event in events {
                           if event.phase == .active {
                               // Update particle emitters.
                               model.emitters[event.id] = ParticlesModel.Emitter(
                                   location: event.location
                               )
                           } else {
                               // Remove emitters when no longer active.
                               model.emitters[event.id] = nil
                           }
                       }
                   }
                   .onEnded { events in
                       for event in events {
                           // Remove emitters when no longer active.
                           model.emitters[event.id] = nil
                       }
                   }
           )
       }
   }
   ```

   > The gesture provides a [SpatialEventCollection](https://developer.apple.com/documentation/swiftui/spatialeventcollection) structure when it detects changes. The collection contains [SpatialEventCollection.Event](https://developer.apple.com/documentation/swiftui/spatialeventcollection/event) values that represent ongoing spatial events. Each event contains a stable, unique identifier so that you can track how the event changes over time. The event also indicates its current location, a timestamp, the pose of the input device that creates it, and other useful information.
   >
   > The phase of events in the collection can change to [SpatialEventCollection.Event.Phase.ended](https://developer.apple.com/documentation/swiftui/spatialeventcollection/event/phase-swift.enum/ended) or [SpatialEventCollection.Event.Phase.cancelled](https://developer.apple.com/documentation/swiftui/spatialeventcollection/event/phase-swift.enum/cancelled) while the gesture itself remains active. Individually track state for each event inside [onChanged(\_:)](https://developer.apple.com/documentation/swiftui/gesture/onchanged(_:)) or [updating(\_:body:)](https://developer.apple.com/documentation/swiftui/gesture/updating(_:body:)) and clean up all state in [onEnded(\_:)](https://developer.apple.com/documentation/swiftui/gesture/onended(_:)).
* [Tim — Your iOS Mobile Dev Interview Coach](https://chat.openai.com/g/g-Bq6IZuAxd-tim-your-ios-mobile-dev-interview-coach)
> I help new and seasoned iOS developers do well on their take-home technical assessment, initial phone screen and on-site interviews. I can quiz you on system design whiteboarding, app architecture and common data structures & algorithms problems, based in Swift.
* [ChatPRD](https://chat.openai.com/g/g-G5diVh12v-chatprd)
> An on-demand Chief Product Officer that drafts and improves your PRDs, while coaching you to become an elite product manager.
* [XcodeGPT](https://chat.openai.com/g/g-z9dLWHPID-xcodegpt)
> Your Xcode assistant with a new logo
* [NSBundleResourceRequest](https://developer.apple.com/documentation/foundation/nsbundleresourcerequest)
> _A resource manager you use to download content hosted on the App Store at the time your app needs it._
> You identify on-demand resources during development by creating string identifiers known as tags and assigning one or more tags to each resource. An `NSBundleResourceRequest` object manages the resources marked by one or more tags.
>
> You use the resource request to inform the system when the managed tags are needed and when you have finished accessing them. The resource request manages the downloading of any resources marked with the managed tags that are not already on the device and informs your app when the resources are ready for use.
>
> The system will not attempt to purge the resources marked with a tag from on-device storage as long as at least one NSBundleResourceRequest object is managing the tag. Apps can access resources after the completion handler of either [beginAccessingResources(completionHandler:)](https://developer.apple.com/documentation/foundation/nsbundleresourcerequest/1614840-beginaccessingresources) or [conditionallyBeginAccessingResources(completionHandler:)](https://developer.apple.com/documentation/foundation/nsbundleresourcerequest/1614834-conditionallybeginaccessingresou) is called successfully. Management ends after a call to [endAccessingResources()](https://developer.apple.com/documentation/foundation/nsbundleresourcerequest/1614843-endaccessingresources) or after the resource request object is deallocated.
>
> Other properties and methods let you track the progress of a download, change the priority of a download, and check whether the resources marked by a set of tags are already on the device. Methods in [Bundle](https://developer.apple.com/documentation/foundation/bundle) indicate to the system the relative importance of preserving a tag in memory after it is no longer in use. For more information, see [setPreservationPriority(\_:forTags:)](https://developer.apple.com/documentation/foundation/bundle/1614845-setpreservationpriority) and [preservationPriority(forTag:)](https://developer.apple.com/documentation/foundation/bundle/1614839-preservationpriority).
* [XcodeGPT](https://tuist.io/blog/2023/11/10/gpts)
> XcodeGPT is your go-to assistant for any queries about Xcode, its build systems, and project management. This model is not only trained with publicly available Xcode data but also infused with the nuanced understanding we have developed over the years. We are committed to continually updating XcodeGPT with our latest insights, ensuring it remains an invaluable resource for the most current Xcode information.
>
> Experience XcodeGPT in action [here](https://chat.openai.com/g/g-z9dLWHPID-xcodegpt).
* [Adopting the system player interface in visionOS](https://developer.apple.com/documentation/avkit/adopting_the_system_player_interface_in_visionos)
> _Provide an optimized viewing experience for watching 3D video content._
>
> The recommended way to provide a video playback interface for your visionOS app is to adopt [AVPlayerViewController](https://developer.apple.com/documentation/avkit/avplayerviewcontroller). Using this class makes it simple to provide the same playback user interface and features found in system apps like TV and Music. It also provides essential system integration to deliver an optimal viewing experience whether you’re playing standard 2D content or immersive 3D video with spatial audio. This article describes best practices for presenting the player in visionOS and covers the options the player provides to customize its user interface to best fit your app.
* [Positioning and sizing windows](https://developer.apple.com/documentation/visionos/positioning-and-sizing-windows)
> _Influence the initial geometry of windows that your app presents._
>
> visionOS and macOS enable people to move and resize windows. In some cases, your app can use scene modifiers to influence a window’s initial geometry on these platforms, as well as to specify the strategy that the system employs to place minimum and maximum size limitations on a window. This kind of configuration affects both windows and volumes, which are windows with the [volumetric](https://developer.apple.com/documentation/SwiftUI/WindowStyle/volumetric) window style.
>
> Your ability to configure window size and position is subject to the following constraints:
>
> * The system might be unable to fulfill your request. For example, if you specify a default size that’s outside the range of the window’s resizability, the system clamps the affected dimension to keep it in range.
> * Although you can change the window’s content, you can’t directly manipulate window position or size after the window appears. This ensures that people have full control over their workspace.
> * During state restoration, the system restores windows to their previous position and size.
* [GeometryReader: Blessing or Curse?](https://betterprogramming.pub/geometryreader-blessing-or-curse-1ebd2d5005ec)
> GeometryReader has been present since the birth of SwiftUI, playing a crucial role in many scenarios. However, from the very beginning, some developers have held a negative attitude towards it, believing it should be avoided as much as possible. Especially after recent SwiftUI updates added some APIs that can replace GeometryReader, this view has been further reinforced.
>
> This article will dissect the “common problems” of GeometryReader to see if it is really so unbearable, and whether those performances criticized as “not meeting expectations” are actually due to problems with the developers’ “expectations” themselves.
* [Swiftie](https://chat.openai.com/g/g-1ex7nJso7-swiftie)
> **An expert Swift developer at your service***
>
> _*Requires ChatGPT Plus_
* [**Reasync.swift**](https://github.com/pointfreeco/swift-concurrency-extras/blob/1676c3b73e1657b9e91f0ca8194855eee4138006/Sources/ConcurrencyExtras/Result.swift)

```swift
extension Result where Failure == Swift.Error {
  /// Creates a new result by evaluating an async throwing closure, capturing the returned value as
  /// a success, or any thrown error as a failure.
  ///
  /// - Parameter body: A throwing closure to evaluate.
  @_transparent
  public init(catching body: () async throws -> Success) async {
    do {
      self = .success(try await body())
    } catch {
      self = .failure(error)
    }
  }
}
```

Semantically, `@_transparent` means something like "treat this operation as if it were a primitive operation". The name is meant to imply that both the compiler and the compiled program will "see through" the operation to its implementation.

This has several consequences:

- Any calls to a function marked `@_transparent` MUST be inlined prior to doing dataflow-related diagnostics, even under `-Onone`. This may be necessary to catch dataflow errors.
- Because of this, a `@_transparent` function is implicitly inlinable, in that changing its implementation most likely will not affect callers in existing compiled binaries.
- Because of this, a public or `@usableFromInline` `@_transparent` function MUST only reference public symbols, and MUST not be optimized based on knowledge of the module it's in. [The former is caught by checks in Sema.]
- Debug info SHOULD skip over the inlined operations when single-stepping through the calling function.

This is all that `@_transparent` means.
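A quick usage sketch of that async initializer: it lets you capture the outcome of a throwing async call as a value instead of wrapping it in `do`/`catch`. The extension is repeated here (minus `@_transparent`) so the sketch is self-contained; `fetchCount` is a hypothetical async API.

```swift
import Foundation

extension Result where Failure == Swift.Error {
  /// Same shape as the initializer above, minus the @_transparent attribute.
  init(catching body: () async throws -> Success) async {
    do { self = .success(try await body()) }
    catch { self = .failure(error) }
  }
}

struct Flaky: Error {}

// Hypothetical async API that can throw.
func fetchCount(failing: Bool) async throws -> Int {
  if failing { throw Flaky() }
  return 42
}

// Capture success or failure as a plain value.
let ok: Result<Int, Error> = await Result { try await fetchCount(failing: false) }
let bad: Result<Int, Error> = await Result { try await fetchCount(failing: true) }
```

Because the closure parameter isn't `@escaping`, this composes naturally with structured concurrency: the body runs to completion before the initializer returns.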

Format Styles In Excruciating Detail

Swift’s FormatStyle and ParseableFormatStyle are the easiest way to convert Foundation data types to and from localized strings. Unfortunately Apple hasn’t done a great job in documenting just what it can do, or how to use them.
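A minimal sketch of the two protocols in action, with the locale pinned so the results are deterministic (these APIs require macOS 12 / iOS 15 or a toolchain that ships the newer Foundation):

```swift
import Foundation

let en = Locale(identifier: "en_US")

// FormatStyle: number → localized string.
let price = 1234.5678.formatted(.number.precision(.fractionLength(2)).locale(en))
// "1,234.57" with en_US locale data

// ParseableFormatStyle: localized string → number.
let parsed = try? Decimal("1,234.57", format: .number.locale(en))
```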

Swift Macros were introduced by Apple as a feature bundled within Swift Packages. This approach enhances shareability—a notable limitation of XcodeProj elements like targets. However, it also tightens the reliance on seamless integration between Xcode and the Swift Package Manager (SPM), which, from my experience and that of others, can be less than ideal in large projects with numerous dependencies. In fact, some developers are shifting towards Tuist’s methodology, reminiscent of CocoaPods, where projects are immediately ready for compilation upon opening.

Given the suboptimal experience offered by Apple’s ecosystem, which precludes optimization opportunities, Tuist employs SPM to resolve packages before mapping them onto Xcodeproj elements. While generally effective, this approach has encountered occasional setbacks, which developers can rectify by tweaking the build settings of the generated targets. Yet, it has not supported Swift Macros since their announcement.

Interestingly, developers managing Xcode rules for Bazel quickly devised a method to accommodate Swift Macros using compiler flags. Inspired by this, could Tuist adopt a similar strategy by utilizing targets, dependencies, and build settings? After some investigation, the answer is affirmative. Here’s the blueprint:

The macro’s representative target must be a macOS command-line target, encompassing the macro’s source code. A secondary, dependent target is required, hosting the public macro definition for import by other targets.

Targets wishing to leverage the macro should:

- Establish a dependency on the secondary target for prior compilation.
- Include the setting `OTHER_SWIFT_FLAGS` with the value `-load-plugin-executable $BUILT_PRODUCTS_DIR/ExecutableName#ExecutableName`.

This setup is contingent upon the secondary target and the dependent targets producing their outputs in the same directory. If that’s not the case, SWIFT_INCLUDE_PATHS will be necessary to make the module available to the dependent targets.
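The blueprint above might look roughly like this in a Tuist manifest. This is a sketch only: it follows an older `ProjectDescription` API, and every name here (targets, bundle IDs, source paths) is a placeholder, not Tuist's documented macro support.

```swift
import ProjectDescription

// Sketch of the two-target macro setup described above.
// Exact Target/Settings signatures may differ in current Tuist releases.
let project = Project(
  name: "MacroExample",
  targets: [
    // 1. The macro implementation: a macOS command-line tool
    //    containing the macro's source code (and its SwiftSyntax deps).
    Target(
      name: "MyMacroPlugin",
      platform: .macOS,
      product: .commandLineTool,
      bundleId: "com.example.MyMacroPlugin",
      sources: ["Macros/Implementation/**"]
    ),
    // 2. The public macro definitions that other targets import,
    //    loading the plugin executable via compiler flags.
    Target(
      name: "MyMacro",
      platform: .macOS,
      product: .framework,
      bundleId: "com.example.MyMacro",
      sources: ["Macros/Interface/**"],
      dependencies: [.target(name: "MyMacroPlugin")],
      settings: .settings(base: [
        "OTHER_SWIFT_FLAGS": "-load-plugin-executable $BUILT_PRODUCTS_DIR/MyMacroPlugin#MyMacroPlugin"
      ])
    ),
  ]
)
```

Targets that consume the macro would carry the same `OTHER_SWIFT_FLAGS` entry and depend on `MyMacro`.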

The new Swift 5.9 release contains a number of helpful, new features for debugging code, including an out-of-process, interactive crash handler to inspect crashes in real time, the ability to trigger the debugger for just-in-time debugging, along with concurrency-aware backtracing to make it easier to understand control flow in a program that uses structured concurrency.

Today we are announcing our new library, automerge-repo, which makes it vastly easier to build local-first applications with Automerge. Take a look at our quickstart guide or read on for some background and examples.

For those new to this idea: local-first applications are a way of building software that allows both real-time collaboration (think Google Docs) and offline working (think Git). They work by storing the user's data locally, on their own device, and syncing it with collaborators in the background. You can read more about the motivation for local-first software in our essay, or watch a talk introducing the idea.

A challenge in local-first software is how to merge edits that were made independently on different devices, and CRDTs were developed to solve this problem. Automerge is a fairly mature CRDT implementation. In fact, we wrote this blog post using it! The API is quite low-level though, and Automerge-Core has no opinion about how networking or storage should be done. Often, the first thing developers ask after discovering Automerge is how to connect it into an actual application.

Our new library, automerge-repo, extends the collaboration engine of Automerge-Core with networking and storage adapters, and provides integrations with React and other UI frameworks. You can get to building your app straight away by taking advantage of default implementations that solve common problems such as how to send binary data over a WebSocket, how often to send synchronization messages, what network format to use, or how to store data in places like the browser's IndexedDB or on the filesystem.

We explained the implementation details of an event-sourced functional domain model in Kotlin. In the process, we attempted to show how straightforward testing of such a model can be and how it doesn't require any dedicated testing techniques or tools. The model itself, on the other hand, remains a rich core of business logic.

The following sections are general guidelines that describe fundamental Mac layout principles of center equalization, text and control alignment, appropriate use of white space, and visual balance. Following these guidelines will help you create functional and aesthetically pleasing windows that are easy for Mac users to understand and use.

As you lay out your window, remember to observe the principle of consistency in your decisions. If you have good reasons to break some layout guidelines, be sure to do it in a consistent way. Users tend to ignore symmetry and balance but will notice inconsistency.

Inconsistencies in a window can also lead users to conclude that the window was poorly designed and/or implemented. For example, users won’t notice if the margins inside your window edges are 18 points wide (instead of the recommended 20 points), but are likely to notice if the left margin is wider than the right one.

A Boolean value indicating whether this character represents a newline.
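That line refers to `Character.isNewline` in the Swift standard library; a minimal sketch:

```swift
let tab: Character = "\t"
let lineFeed: Character = "\n"

// isNewline is true for line feed, carriage return, and the other
// Unicode line terminators; false for ordinary whitespace like tab.
print(lineFeed.isNewline)  // true
print(tab.isNewline)       // false
```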

Learn about the supported use cases for low-level networking on watchOS.

watchOS groups networking into two categories: high-level and low-level.

watchOS allows all apps to use high-level networking equally. However, it only allows an app to use low-level networking under specific circumstances:

- It allows an audio streaming app to use low-level networking while actively streaming audio. Support for this was introduced in watchOS 6.
- It allows a VoIP app to use low-level networking while running a call using CallKit. Support for this was added in watchOS 9.
- It allows an app on watchOS to set up an application service listener so that the same app on tvOS can establish a low-level connection to it using the DeviceDiscoveryUI framework. Support for this was added in watchOS 9 and tvOS 16.

Bonjour, also known as zero-configuration networking, enables automatic discovery of devices and services on a local network using industry standard IP protocols. Bonjour makes it easy to discover, publish, and resolve network services with a sophisticated, easy-to-use programming interface that is accessible from Cocoa, Ruby, Python, and other languages.

With Visual Look Up, you can identify and learn about popular landmarks, plants, pets, and more that appear in your photos and videos in the Photos app . Visual Look Up can also identify food in a photo and suggest related recipes.

Starting from iOS 17 we now have new properties, such as secondary, tertiary, quaternary and quinary that are defined on an instance of a ShapeStyle. To get hierarchical background colors we simply have to access these properties on the current background style: BackgroundStyle().secondary. BackgroundStyle in SwiftUI conforms to ShapeStyle protocol, so accessing the secondary property on an instance of a BackgroundStyle will return the second level of the background in the current context that depends on the operating system and color scheme (light or dark mode enabled).

We can also get the current background style from the static background property defined on ShapeStyle.
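A minimal SwiftUI sketch of the hierarchical styles described above, assuming an iOS 17+ deployment target (the view and its content are placeholders):

```swift
import SwiftUI

// A card whose fill sits one hierarchical level below the window
// background, via the static `background` style and its `secondary`
// property (iOS 17+).
struct CardView: View {
  var body: some View {
    Text("Hello")
      .padding()
      .background(.background.secondary, in: RoundedRectangle(cornerRadius: 12))
  }
}
```

Because the style is resolved in context, the same code yields appropriate fills in light and dark mode without branching on the color scheme.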

Containers have fundamentally changed the way that modern software is developed and deployed. Containers are supported by a wide range of operating systems including FreeBSD, Solaris, Linux and even Windows, but are not natively supported by macOS. Until now.

Create an immersive experience by making your app’s content respond to the local shape of the world.

Finally, a display that can go with you anywhere.

Orion turns iPad into your portable HDMI Monitor.

Dot by New Computer is an intelligent guide designed to help you remember, organize, and navigate your life.

The Swift package ecosystem has thousands of packages to help you with all kinds of tasks across your projects. You’ll find networking, testing, UI helpers, logging, animation, and many more packages that work with the Swift Package Manager (SwiftPM).

The gist of the talk is: what if we could define a predicate for even numbers like this in Agda (or Coq or Idris or Lean):

```agda
data Even : Nat → Set where
  zero : Even 0
  suc  : ∀ {n} → not (Even n) → Even (suc n)
```

> So `0` is even, and `suc n` is even if `n` isn't.
>
> But if we get this to work, then what would this declaration mean?

```agda
data Liar : Set where
  liar : not Liar → Liar
```

The Misty Programming Language is a dynamic, general-purpose, transitional, actor language. It has a gentle syntax that is intended to benefit students, as well as advanced features such as capability security and lambdas with lexical scoping.

The grammar of the language is expressed in McKeeman Form.

Imagine if you could say to ChatGPT, "Go try out my app for 5 minutes and let me know what you think about the getting started experience." Or if you could ask questions like... Does my iOS app's GUI follow common practices? Is it accessible? What are some examples of apps that use these specific UI controls on the same screen?

If we had a rich database of app GUIs and the right ML models, then we could answer these questions and build a copilot tool that "understands" the visual and interaction designs of GUIs, not just the code!

October

However, wouldn’t it also be nice to see all of these pull requests at once? That is, all of your own pull requests, everything assigned to review, and everything where you are actively in discussion? You can, with the filter involves:@me, which will show you all the pull requests you are involved in, in any capacity. In other words, it shows you everything that requires your attention.

Open sourcing our Swift bindings generator for WinRT — and an end-to-end sample application for anyone looking to build a modern Windows application in Swift.

Learn how CPUs work, and discover Apple’s underrated competitive advantage.

Skip is your automated Android development partner. As you develop your modern Swift and SwiftUI iOS app, Skip's intelligent transpiler generates the equivalent Kotlin and Compose Android app alongside it. Deliver fully native apps for both the App Store and Play Store with one team, one language, and one codebase.

Want to learn more? Take our video tour or browse the documentation.

Break down software development programs into a self-explanatory sequence or hash out the steps involved in getting your startup off the ground. An event storming session enlightens the whole team.

An increasing number of the machine learning (ML) models we build at Apple each year are either partly or fully adopting the Transformer architecture. This architecture helps enable experiences such as panoptic segmentation in Camera with HyperDETR, on-device scene analysis in Photos, image captioning for accessibility, machine translation, and many others. This year at WWDC 2022, Apple is making available an open-source reference PyTorch implementation of the Transformer architecture, giving developers worldwide a way to seamlessly deploy their state-of-the-art Transformer models on Apple devices.

This implementation is specifically optimized for the Apple Neural Engine (ANE), the energy-efficient and high-throughput engine for ML inference on Apple silicon. It will help developers minimize the impact of their ML inference workloads on app memory, app responsiveness, and device battery life. Increasing the adoption of on-device ML deployment will also benefit user privacy, since data for inference workloads remains on-device, not on the server.

In this article we share the principles behind this reference implementation to provide generalizable guidance to developers on optimizing their models for ANE execution. Then, we put these principles into action and showcase how to deploy an example pretrained Transformer model, the popular Hugging Face distilbert, in just a few lines of code. Notably, this model, which works out-of-the-box and on device using Core ML already, is up to 10 times faster and consumes 14 times less memory after our optimizations.

SkinGenerator.io uses custom generative art models to generate skins for video games. With a simple text prompt you can create characters quickly and easily. You're limited only by your imagination!

I've found a very strange behaviour which looks like a bug. SwiftUI's `sheet` and `fullScreenCover` do not release objects that were passed to their `item:` parameter (and to the view builder body, by the way, but here is the simplified case).

It works well and memory is releasing on iOS 16 built with both Xcode 14 or 15. (simulators, devices)

Memory is leaking and NOT releasing on iOS 17 built with Xcode 15. (simulator, device 17.0.2)

We present iLeakage, a transient execution side channel targeting the Safari web browser present on Macs, iPads and iPhones. iLeakage shows that the Spectre attack is still relevant and exploitable, even after nearly 6 years of effort to mitigate it since its discovery. We show how an attacker can induce Safari to render an arbitrary webpage, subsequently recovering sensitive information present within it using speculative execution. In particular, we demonstrate how Safari allows a malicious webpage to recover secrets from popular high-value targets, such as Gmail inbox content. Finally, we demonstrate the recovery of passwords, in case these are autofilled by credential managers.

Simulate and render 3D content for use in your augmented reality apps.

RealityKit provides high-performance 3D simulation and rendering capabilities you can use to create visionOS apps or to create augmented reality (AR) apps for iOS, macOS, and tvOS. RealityKit is an AR-first 3D framework that leverages ARKit to seamlessly integrate virtual objects into the real world.

Use RealityKit’s rich functionality to create compelling augmented reality (AR) experiences.

- Create and import full RealityKit scenes with models, animations, and Spatial Audio by using Reality Composer Pro for visionOS.
- Build or modify scenes at runtime by adding 3D models, shape primitives, and sounds from code.
- Have virtual objects interact with objects in the real world.
- Animate objects, both manually and with physics simulations.
- Respond to user input and changes in a person’s surroundings.
- Synchronize across devices and use SharePlay to enable group AR experiences.

Learn how everything fits together in RealityKit.

RealityKit is a 3D framework designed for building apps, games, and other immersive experiences. Although it’s built in an object-oriented language and uses object-oriented design principles, RealityKit’s architecture avoids heavy use of composition — where objects are built by adding instance variables that hold references to other objects — in favor of a modular design based on a paradigm called Entity Component System (ECS) that divides application objects into one of three types.

Following the ECS paradigm allows you to re-use the functionality contained in a component in many different entities, even if they have very different inheritance chains. Even if two objects have no common ancestors other than Entity, you can add the same components to both of them and give them the same behavior or functionality.

Spatial layout techniques help you take advantage of the infinite canvas of Apple Vision Pro and present your content in engaging, comfortable ways.

In visionOS, you can design apps and games that extend beyond windows and volumes, and let people immerse themselves in your content.

In visionOS, people can run multiple apps at the same time in the Shared Space or concentrate on a single app at a time in a Full Space. By default, your app launches in the Shared Space, where people can switch between multiple running apps much as they do on a Mac. When they want a more immersive experience, people can transition your app to a Full Space, where other apps hide and your app can display content anywhere.

View various examples of .M3U8 files formatted to index streams and .ts media segment files on your Mac, iPhone, iPad, and Apple TV.

The fast, native SQLite database editor for macOS.

Create and manipulate 3D mathematical primitives.

The Spatial module is a lightweight 3D mathematical library that provides a simple API for working with 3D primitives. Much of its functionality is similar to the 2D geometry support in Core Graphics, but in three dimensions.

Dive into the details of Swift concurrency and discover how Swift provides greater safety from data races and thread explosion while simultaneously improving performance. We'll explore how Swift tasks differ from Grand Central Dispatch, how the new cooperative threading model works, and how to ensure the best performance for your apps. To get the most out of this session, we recommend first watching “Meet async/await in Swift,” “Explore structured concurrency in Swift,” and “Protect mutable state with Swift actors.”

In visionOS, the VideoPlayerComponent is another way to create a video scene (including for HEVC video with transparency).

When people wear Apple Vision Pro, they enter an infinite 3D space where they can engage with your app or game while staying connected to their surroundings.

Help people stay comfortable when playing video in your app. Often, an app doesn’t control the content in the videos it plays, but you can help people stay comfortable by:

- Letting them choose when to start playing a video
- Using a small window for playback, letting people resize it if they want
- Making sure people can see their surroundings during playback

In a fully immersive experience, avoid letting virtual content obscure playback or transport controls. In a fully immersive context, the system automatically places the video player at a predictable location that provides an optimal viewing experience. Use this location to help make sure that no virtual content occludes the default playback or transport controls in the ornament near the bottom of the player.

Avoid automatically starting a fully immersive video playback experience. People need control over their experience and they’re unlikely to appreciate being launched into a fully immersive video without warning.

Create a thumbnail track if you want to support scrubbing. The system displays thumbnails as people scrub to different times in the video, helping them choose the section they want. To improve performance, supply a set of thumbnails that each measure 160 px in width. For developer guidance, see HTTP Live Streaming (HLS) Authoring Specification for Apple Devices > Trick Play.

Avoid expanding an inline video player to fill a window. When you display the system-provided player view in a window, playback controls appear in the same plane as the player view and not in an ornament that floats above the window. Inline video needs to be 2D and you want to make sure that window content remains visible around the player so people don’t expect a more immersive playback experience. For developer guidance, see AVPlayerViewController.

Use a RealityKit video player if you need to play video in a view like a splash screen or a transitional view. In situations like these, people generally expect the video to lead into the next experience, so they don’t need playback controls or system-provided integration, like dimming and view anchoring. The RealityKit video player automatically uses the correct aspect ratio for both 2D and 3D video and supports closed captions. RealityKit can also help you play video as a special effect on the surface of a custom view or object. For developer guidance, see RealityKit.

Subtle, expressive sounds are everywhere in visionOS, enhancing experiences and providing essential feedback when people look at a virtual object and use gestures to interact with it. The system combines audio algorithms with information about a person’s physical surroundings to produce Spatial Audio, which is sound that people can perceive as coming from specific locations in space, not just from speakers.

In visionOS, audio playback from the Now Playing app pauses automatically when people close the app’s window, and audio from an app that isn’t the Now Playing app can duck when people look away from it to a different app.

Prefer playing sound. People generally choose to keep sounds audible while they’re wearing the device, so an app that doesn’t play sound — especially in an immersive moment — can feel lifeless and may even seem broken. Throughout the design process, look for opportunities to create meaningful sounds that aid navigation and help people understand the spatial qualities of your app.

Design custom sounds for custom UI elements. In general, a system-provided element plays sound to help people locate it and receive feedback when they interact with it. To help people interact with your custom elements, design sounds that provide feedback and enhance the spatial experience of your app.

Use Spatial Audio to create an intuitive, engaging experience. Because people can perceive Spatial Audio as coming from anywhere around them, it works especially well in a fully immersive context as a way to help an experience feel lifelike. Ambient audio provides pervasive sounds that can help anchor people in a virtual world and an audio source can sound like it comes from a specific object. As you build the soundscape for your app, consider using both types of audio.

Consider defining a range of places from which your app sounds can originate. Spatial Audio helps people locate the object that’s making sound, whether it’s stationary or moving in space. For example, when people move an app window that’s playing audio, the sound continues to come directly from the window, wherever people move it.

Consider varying sounds that people could perceive as repetitive over time. For example, the system subtly varies the pitch and volume of the virtual keyboard’s sounds, suggesting the different sounds a physical keyboard can make as people naturally vary the speed and forcefulness of their typing. An efficient way to achieve a pleasing variation in sound is to randomize a sound file’s pitch and volume during playback, instead of creating different files.

Decide whether you need to play sound that’s fixed to the wearer or tracked by the wearer. People perceive fixed sound as if it’s pointed at them, regardless of the direction they look or the virtual objects they move. In contrast, people tend to perceive tracked sound as coming from a particular object, so moving the object closer or farther away changes what they hear. In general, you want to use tracked sound to enhance the realism of your experience, but there could be cases where fixed sound is a good choice. For example, Mindfulness uses fixed sound to envelop the wearer in an engaging, peaceful setting.

Welcome to the future of media production

The creator economy will always struggle so long as the tooling is controlled by some massive body that can set the terms of creation. The solution is to make the tools cheap enough to scale and accessible to the point where they’re table stakes.

Create AR games and experiences that interact with real-world objects on LiDAR-equipped iOS devices.

Apple devices integrate hardware, software, apps, and services to let you manage your deployment projects easily. Get the control and flexibility you want by using Apple School Manager or Apple Business Manager and your chosen mobile device management solution.

See What’s new in Apple Platform Deployment >

The Reveal team have spent more than a year building the new Insights workspace, working with industry experts and organisations to encode their expertise into more than 130 rules that pinpoint areas of improvement for your app. These rules are based on a combination of industry and platform-specific guidelines, best practices and over a decade of our own experience developing apps.

Insights runs a powerful, accessibility-focused audit of your app using information extracted from both the visual and accessible user interfaces of your app. This combination of the accessible and visual user interfaces is unique to Reveal, and allows us to provide a level of insight that is more powerful than anything else available for iOS, iPadOS and tvOS.

This new functionality takes Reveal from a passive developer tool, where you need to know what to look for in order to identify issues, to one that proactively surfaces problems along with suggestions on how to fix them. This radically improves developer efficiency and allows you to see and address issues you may never have known existed in your apps.

This document explains a more controlled installation procedure for Lean on macOS. There is a quicker way described on the main install page, but it requires more trust.

Leverage 3D video and Spatial Audio to deliver an immersive experience.

Destination Video is a multiplatform video-playback app for visionOS, iOS, and tvOS. People get a familiar media-browsing experience navigating the libraryʼs content and playing videos they find interesting. The app provides a similar experience on supported platforms, but leverages unique features of visionOS to create a novel, immersive playback experience.

Design RealityKit scenes for your visionOS app.

Use Reality Composer Pro to visually design, edit, and preview RealityKit content. In Reality Composer Pro, you can create one or more scenes, which act as a container for RealityKit content. Scenes contain hierarchies of entities, which are virtual objects such as 3D models.

A material that supports animated textures.

In RealityKit, a material is an object that defines the surface properties of a rendered 3D object. A VideoMaterial is a material that maps a movie file on to the surface of an entity. Video materials are unlit, which means that scene lighting doesn’t affect them. Video materials support transparency if the source video’s file format also supports transparency.

Video materials use an AVPlayer instance to control movie playback. You can use any movie file format that AVPlayer supports to create a video material. To control playback of the material’s video, use the avPlayer property, which offers methods like play() and pause().

The following code demonstrates how to create and start playing a video material using a movie file from your application bundle.
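The quoted code sample didn't survive extraction; a minimal sketch of the pattern it describes follows. The resource name `glow.mp4` is a placeholder, not from the original documentation.

```swift
import AVFoundation
import RealityKit

// Load a movie file from the app bundle (file name is a placeholder).
guard let url = Bundle.main.url(forResource: "glow", withExtension: "mp4") else {
  fatalError("Missing video resource")
}

// A VideoMaterial wraps an AVPlayer and maps its frames onto a surface.
let player = AVPlayer(url: url)
let material = VideoMaterial(avPlayer: player)

// Apply the material to a plane-shaped entity, then start playback
// through the material's avPlayer property.
let screen = ModelEntity(
  mesh: .generatePlane(width: 1.6, height: 0.9),
  materials: [material]
)
material.avPlayer?.play()
```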

With custom properties, you can add metadata to repositories in your organization. You can use those properties to target repositories with rulesets.

This is going to be an unusual Papers We Love talk, as I'm going to discuss a bunch of different papers in relatively light detail but with rich historical context. I'll post a transcript and mini-site for this talk that has links to all these papers, and many others that serve as supporting material.

Last year, I gave a talk at Strange Loop where I said everyone is programming wrong. This upset some people, so I thought I'd make a more modest claim today: Everyone is doing mathematics wrong

The App That Opens Apps

WHAT HAPPENS WHEN YOU OPEN THAT APP?

macOS checks every app against a slew of security features: Gatekeeper, notarization, hardening, entitlements and more. But it doesn't show you the result of these checks, preferring to keep these behind the scenes — either the app opens or it doesn't, perhaps with an “app downloaded from the internet” dialog first.

Even “code signatures” — around since Mac OS X 10.5 (Leopard!) — are visible only via arcane Terminal incantations.

WHAT'S INSIDE THAT APP?

Today's macOS apps are often made up of many pieces. Some of these are organizational artifacts, of interest only to the developers.

But some pieces of an app can extend macOS itself — Share Extensions, Today Widgets, Safari Extensions, Quick Look generators, Spotlight importers, and so on. macOS lets you see and manage some of these, but not all of them, and not all in one place.

ENTER APPARENCY — THE APP THAT OPENS APPS.

Control-click on an app in the Finder, choose Open With Apparency, and see all the details in one place.

The MLC LLM iOS app can be installed in two ways: through the pre-built package or by building from source. If you are an iOS user looking to try out the models, the pre-built package is recommended. If you are a developer seeking to integrate new features into the package, building the iOS package from source is required.

Take a tour of some of the most terrifying and mind-blowing destinations in our galaxy ... and beyond. After a visit to these nightmare worlds, you may never want to leave Earth again! You can also download our free posters — based on real NASA science — if you dare.

A Boolean value that indicates whether the process is an iPhone or iPad app running on a Mac.

The value of this property is true only when the process is an iOS app running on a Mac. The value of the property is false for all other apps on the Mac, including Mac apps built using Mac Catalyst. The property is also false for processes running on platforms other than macOS.
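A minimal check (the property is available from iOS 14):

```swift
import Foundation

// True only when an iOS app is running on a Mac ("Designed for iPad");
// false for Mac Catalyst apps and for native Mac apps.
if ProcessInfo.processInfo.isiOSAppOnMac {
    // Adjust pointer and keyboard handling for the Mac environment.
}
```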

This is a library for creating full screen effects in SwiftUI in an easy, seamless way. The package is available on Swift Package Manager (SPM) and fully open-source.

SwiftTreeSitter provides a Swift interface to the tree-sitter incremental parsing system. You can use it to parse language text, reparse it as it changes, and query the syntax tree.

In this article, I'll go through various options for implementing SwiftFormat on a production project, rather than how to integrate the tool locally (which is relatively straightforward). I believe the tool provides greater benefit as an automation for every developer on the team rather than just a few. We'll also cover the relative advantages and disadvantages of each option. I hope this will help you choose the right tool for your project.

Use RealityKit to create an interactive ride in visionOS.

Swift Splash uses multiple Reality Composer Scenes to create prepackaged entity hierarchies that represent each of the slide pieces the player connects to construct their ride. It demonstrates how to hide and reveal sections of the entity hierarchy based on the current state of the app. For example, each slide piece contains an animated fish entity that’s hidden until the ride runs and the fish arrives at that particular piece. While Swift Splash is a fun, game-like experience, the core idea of assembling virtual objects out of predefined parts can also be used as the basis for a productivity or creation app.

Tart is a virtualization toolset to build, run and manage macOS and Linux virtual machines on Apple Silicon.

A sound and fast distributed version control system based on a mathematical theory of asynchronous work.

There’s no straightforward way to avoid this problem, but Xcode build settings offer a sophisticated (albeit brain-bendingly obtuse) mechanism for varying the value of a build setting based on arbitrary conditions. Technically this technique could be used in Xcode’s build settings editor, but because of the complexity of variable definitions, it’s a lot easier (not to mention easier to manage with source control) if you declare such settings in an Xcode configuration file. The examples below will use the declarative format used by these files.

The key to applying this technique is understanding that build settings can themselves be defined in terms of other build settings.
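For instance, a hypothetical sketch in the xcconfig format (the setting names are illustrative), where one setting is resolved through another by composing variable references:

```
// Hypothetical: pick extra Swift flags per configuration by indirection.
FEATURE_FLAGS_Debug = -DDEBUG_MENU
FEATURE_FLAGS_Release =
OTHER_SWIFT_FLAGS = $(inherited) $(FEATURE_FLAGS_$(CONFIGURATION))
```

Because `$(CONFIGURATION)` expands to `Debug` or `Release`, the final lookup selects the matching `FEATURE_FLAGS_*` setting.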

This is the spoken word version of my essay, Choose Boring Technology. I have largely come to terms with it and the reality that I will never escape its popularity.

I gave this most recently at the WikiMedia Foundation’s developer conference, where Scott Ananian called it “how to be old, for young people.” Here are my other talks, my website, and some Medium posts.

September

I made this game to teach my daughter how buffer overflows work. Looking at programs as something you can play with, and poke and twist and make it do something else, is my favourite part of modern computing. I think it's the right way to look at programs. When your microwave oven gets an update and starts crashing, you can hack it. Or when your keyboard controller’s firmware is bad, you can hack it (looking at you vortex pok3r). She is 12 yo now but her assembler skills are getting better and better, hopefully one day she will be able to hack her own keyboard :)

The game is about creating a small shellcode in memory by copying existing instructions and then exploiting a buffer overflow to jump into it, so that you can overwrite your opponent’s return address to force them to go to the game_over() function. There are other mechanics as well and more layers of strategy (like setting the exception handler or monkeypatching).

All players share the same memory and execute the same program, time-sharing the same processor under an operating system with preemptive scheduling: each turn a player executes 10 instructions, after which the process is interrupted by the operating system and it's the other player's turn. Each player's stack pointer starts at a different location. There is no virtual memory.

Faster variable inspection with p and po

LLDB provides the shorthand p command alias to inspect variables and po to call the debugDescription property of objects. Originally, these were aliases for the rather heavyweight expression and expression -O commands. In Swift 5.9, the p and po command aliases have been redefined to the new dwim-print command.

The dwim-print command prints values using the most user-friendly implementation. “DWIM” is an acronym for “Do What I Mean”. Specifically, when printing variables, dwim-print will use the same implementation as frame variable or v instead of the more expensive expression evaluator.
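A quick session sketch (the variable names are hypothetical; annotations added for explanation):

```
(lldb) p count               // fast path: same machinery as `v` / `frame variable`
(lldb) po view               // prints the object's debugDescription
(lldb) expression count + 1  // full expression evaluator; creates $R0, $R1, …
```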

In addition to being faster, using p no longer creates persistent result variables like $R0, which are often unused in debugging sessions. Persistent result variables not only incur overhead but also retain any objects they contain, which can be an unexpected side effect for the program execution.

Support for generic type parameters in expressions

LLDB now supports referring to generic type parameters in expression evaluation. For example, given the following code:

```swift
func use<T>(_ t: T) {
    print(t) // break here
}

use(5)
use("Hello!")
```

Running po T.self, when stopped in use, will print Int when coming in through the first call, and String in the second.

Fine-grained scope information

The Swift compiler now emits more precise lexical scopes in debug information, allowing the debugger to better distinguish between different variables, like the many variables named x in the following example:

```swift
func f(x: AnyObject?) {
  // function parameter `x: AnyObject?`
  guard let x else { return }
  // local variable `x: AnyObject`, which shadows the function argument `x`
  ...
}
```

In Swift 5.9, the compiler now uses more accurate `ASTScope` information to generate the lexical scope hierarchy in the debug information, which results in some behavior changes in the debugger.
* [Twitter Network Layer (a.k.a TNL)](https://github.com/twitter/ios-twitter-network-layer)
> The **Twitter Network Layer (TNL)** is a framework for interfacing with the **Apple** provided `NSURLSession` stack that provides additional levels of control and insight over networking requests, provides simple configurability and minimizes the cognitive load necessary to maintain a robust and wide-reaching networking system.
* [Interacting with your app in the visionOS simulator](https://developer.apple.com/documentation/visionos/interacting-with-your-app-in-the-visionos-simulator)
> #### Interact with your app
>
> To use your Mac’s pointer and keyboard to create gestures, choose “Select to interact with the scene” from the buttons at the bottom-right of a visionOS simulator window. The current gaze position tracks your pointer movements when you hover over content within the space.
>
> Use the following actions to trigger gestures:
> | Gesture | To simulate |
> | --- | --- |
> | Tap | Click. |
> | Double-tap | Double-click. |
> | Touch and hold | Click and hold. |
> | Drag (left, right, up, and down) | Drag left, right, up, and down. |
> | Drag (forward and back) | Shift-drag up and down. |
> | Two-handed gestures | Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points. |
>
> Activate device buttons using menu items or by clicking the controls in the simulator window toolbar.
>
> #### Navigate the space
>
> Use your Mac’s pointer and the keyboard to reposition your viewpoint in a visionOS simulator window:
> | Movement | To simulate |
> | --- | --- |
> | Forward | Press the W key (or Up Arrow key), or perform a pinch gesture moving two fingers away from each other on a trackpad. |
> | Backward | Press the S key (or Down Arrow key), or perform a pinch gesture moving two fingers toward each other on a trackpad. |
> | Left | Press the A key (or Left Arrow key), or scroll left using a trackpad or Magic Mouse. |
> | Right | Press the D key (or Right Arrow key), or scroll right using a trackpad or Magic Mouse. |
> | Up | Press the E key, or scroll up using a trackpad or Magic Mouse. |
> | Down | Press the Q key, or scroll down using a trackpad or Magic Mouse. |
* [Understanding RealityKit’s modular architecture](https://developer.apple.com/documentation/visionos/understanding-the-realitykit-modular-architecture)
> RealityKit is a 3D framework designed for building apps, games, and other immersive experiences. Although it’s built in an object-oriented language and uses object-oriented design principles, RealityKit’s architecture avoids heavy use of composition — where objects are built by adding instance variables that hold references to other objects — in favor of a modular design based on a paradigm called Entity Component System (ECS) that divides application objects into one of three types.
>
> Following the ECS paradigm allows you to re-use the functionality contained in a component in many different entities, even if they have very different inheritance chains. Even if two objects have no common ancestors other than [Entity](https://developer.apple.com/documentation/RealityKit/Entity), you can add the same components to both of them and give them the same behavior or functionality.
* [Bezel](https://getbezel.app)
> _Show your iPhone on your Mac_
>
> Bezel is the easiest way to view, present and record an iPhone.
* [Prompt engineering for Claude's long context window](https://www.anthropic.com/index/prompting-long-context)
> Claude’s [100,000 token long context window enables](https://www.anthropic.com/index/100k-context-windows) the model to operate over hundreds of pages of technical documentation, or even an [entire book](https://twitter.com/AnthropicAI/status/1656700156518060033). As we continue to scale the Claude API, we’re seeing increased demand for prompting guidance on how to maximize Claude’s potential. Today, we’re pleased to share a quantitative case study on two techniques that can improve Claude’s recall over long contexts:
> * Extracting reference quotes relevant to the question before answering
> * Supplementing the prompt with examples of correctly answered questions about other sections of the document
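The two techniques above might combine into a prompt shaped roughly like this (an illustrative sketch, not an official template; `{{…}}` marks placeholders):

```
<document>
{{full document text}}
</document>

Here are example questions about other sections, answered correctly:
Q: {{example question}}
A: {{example answer}}

First, find the quotes from the document most relevant to the question
and write them inside <quotes></quotes> tags. Then answer the question
using only those quotes.

Question: {{question}}
```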
* [A New API Direction for Testing in Swift](https://github.com/apple/swift-testing/blob/main/Documentation/Vision.md)
> A key requirement for the success of any developer platform is a way to use automated testing to identify software defects. Better APIs and tools for testing can greatly improve a platform’s quality. Below, we propose a new API direction for testing in Swift.
>
> We start by defining our basic principles and describe specific features that embody those principles. We then discuss several design considerations in-depth. Finally, we present specific ideas for delivering an all-new testing API in Swift, and weigh them against alternatives considered.
* [Why not React?](https://dev.to/tigt/why-not-react-2f8l)
> This analysis is harsh on React’s MPA suitability. But is that so odd?
>
> It was created to client-render non-core bits of Facebook. Its maintainers only recently used it for server rendering, navigation, or delivering traditional web content. In fact, [its SSR was a happy accident](https://twitter.com/sebmarkbage/status/1516907614566854659). And finally, [longstanding evidence holds React trends antagonistic towards performance](https://timkadlec.com/remembers/2020-04-21-the-cost-of-javascript-frameworks).
>
> _Why **would** React be good at the things we ask it to do?_
>
> With the FB5 redesign, Facebook is finally using React in the ways that we are, [and they have found it wanting](https://twitter.com/dan_abramov/status/1259614150386425858). On the one hand, this means React will surely become much better at desirable SSR features. On the other, _when_ this will happen is unsure, it will heavily change React’s roadmap, and React could change so much that familiarity with how it works today could be a liability rather than a strength.
>
> * For the target audience of rural/new/poorly-connected customers, does Facebook even use React to serve them? Did FB5 change anything, or **does `m.facebook.com` still not use React?**
> * If we want a version of Kroger.com as fast as the demo, but **using the same framework, processes, management, and developers as the existing site — wouldn’t that just become our existing site?** We can’t change our personnel, but we can change the technologies we build on.
> * Last, but certainly not least: **can you make an industry-beating app out of industry-standard parts?**
* [A different way to build together](https://pierre.co)
> Pierre is reimagining industry-old primitives for a new developer platform that brings together your entire team to build and ship software. Branches, not pull requests. Bots, not CI. Features you'll love, not the kitchen sink.
* [Interoperability: Swift’s Super Power](https://browsercompany.substack.com/cp/137231709)
> Swift’s deliberate design choices over the years have resulted in a language that showcases how flexibility and compatibility do not need to come at the cost of usability. One of these design choices was Swift’s focus on native interoperability with other languages. The flexibility that this enables makes it a joy to build rich, native experiences in Swift across a variety of environments.
>
> Traditionally when two languages need to interoperate, the function calls at the boundary between the two languages, also known as the Foreign Function Interface (FFI), will go through C using a library like libffi. This approach has some drawbacks such as incurred runtime performance costs and possibly extra boilerplate code. Instead, Swift embeds a copy of clang, the C and C++ compiler, which is able to directly translate between the languages avoiding penalties in code size and runtime performance. This level of interoperability composes wonderfully with existing systems and enables building complex software atop existing C libraries.
* [Deploying Transformers on the Apple Neural Engine](https://machinelearning.apple.com/research/neural-engine-transformers)
> An increasing number of the machine learning (ML) models we build at Apple each year are either partly or fully adopting the [Transformer architecture](https://arxiv.org/abs/1706.03762). This architecture helps enable experiences such as [panoptic segmentation in Camera with HyperDETR](https://machinelearning.apple.com/research/panoptic-segmentation), [on-device scene analysis in Photos](https://machinelearning.apple.com/research/on-device-scene-analysis), [image captioning for accessibility](https://support.apple.com/guide/iphone/use-voiceover-for-images-and-videos-iph37e6b3844/ios), [machine translation](https://apps.apple.com/us/app/translate/id1514844618), and many others. This year at WWDC 2022, Apple is making available an open-source reference [PyTorch](https://pytorch.org) implementation of the Transformer architecture, giving developers worldwide a way to seamlessly deploy their state-of-the-art Transformer models on Apple devices.
>
> This implementation is specifically optimized for the Apple Neural Engine (ANE), the energy-efficient and high-throughput engine for ML inference on Apple silicon. It will help developers minimize the impact of their ML inference workloads on app memory, app responsiveness, and device battery life. Increasing the adoption of on-device ML deployment will also benefit user privacy, since data for inference workloads remains on-device, not on the server.
>
> In this article we share the principles behind this reference implementation to provide generalizable guidance to developers on optimizing their models for ANE execution. Then, we put these principles into action and showcase how to deploy an example pretrained Transformer model, the popular [Hugging Face distilbert](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english), in just a few lines of code. Notably, this model, which works out-of-the-box and on device using Core ML already, is up to 10 times faster and consumes 14 times less memory after our optimizations.
* [Precise error typing in Swift](https://forums.swift.org/t/precise-error-typing-in-swift/52045)
> But there are arguments in favor of precise error typing as well. It’s been six years; we should look at those reasons and consider whether it’s time to add precise error typing to Swift. And that’s what I’m about to do.
>
> I’ll spoil the conclusion: I think the answer is “yes”. But it’s a “yes” with a very big caveat. I think the strongest reasons for adding precise error typing relate to (1) the interaction of `throws` with the generics system and (2) the requirements of low-level systems code. And so I think that we, as a community, should continue to strongly push imprecise error typing as the primary way that people ought to write code. Precise error typing will be a tool that programmers have in their toolbox for the cases where it’s really important (mostly, for low-level reliability and performance), and it will solve some expressivity problems for generic libraries. But when you don't _need_ that tool, you should stick to throwing `Error`.
>
> I want to be clear that this is not a guarantee that the feature is coming, or even a plan of attack. The main reason I'm writing this is because it keeps coming up in proposal reviews: there are a lot of library features that need to decide whether and how to accommodate the possibility of precise error typing. So I think it's very important to have a conversation about where we're going with this, and I hope that starts now.
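As a historical footnote, precise error typing later landed in Swift 6 as "typed throws" (SE-0413). A minimal sketch of the syntax:

```swift
enum ParseError: Error {
    case notANumber
}

// The error type is stated precisely instead of the untyped `throws`.
func parse(_ input: String) throws(ParseError) -> Int {
    guard let value = Int(input) else { throw ParseError.notANumber }
    return value
}
```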
* [**Swift Macro Testing**](https://github.com/pointfreeco/swift-macro-testing)
> Magical testing tools for Swift macros.
* [**Swift Version**](https://swiftversion.net)
* [SQL join flavors](https://antonz.org/sql-join)
> There is more to SQL joins than you might think. Let's explore them a bit.
> * [Qualified join](https://antonz.org/sql-join/#qualified-join)
> * [Natural join](https://antonz.org/sql-join/#natural-join)
> * [Cross join](https://antonz.org/sql-join/#cross-join)
> * [Partitioned join](https://antonz.org/sql-join/#partitioned-join)
> * [Lateral join](https://antonz.org/sql-join/#lateral-join)
> * [Summary](https://antonz.org/sql-join/#summary)
* [Access-level modifiers on import declarations [SE-0409]](https://github.com/apple/swift-evolution/blob/main/proposals/0409-access-level-on-imports.md)
> **Declaring the access level of an imported module**
>
> The access level is declared in front of the import declaration using some of the modifiers used for a declaration: `public`, `package`, `internal`, `fileprivate`, and `private`.
>
> A public dependency can be referenced from any declaration and will be visible to all clients. It is declared with the `public` modifier.

> ```swift
> public import PublicDependency
> ```
> A dependency visible only to the modules of the same package is declared with the `package` modifier. Only the signature of `package`, `internal`, `fileprivate` and `private` declarations can reference the imported module.
> ```swift
> package import PackageDependency
> ```
> A dependency internal to the module is declared with the `internal` modifier. Only the signature of `internal`, `fileprivate` and `private` declarations can reference the imported module.
> ```swift
> internal import InternalDependency
> ```
> A dependency private to this source file is declared with either the `fileprivate` or the `private` modifier. In both cases the access is scoped to the source file declaring the import. Only the signature of `fileprivate` and `private` declarations can reference the imported module.
> ```swift
> fileprivate import DependencyPrivateToThisFile
> private import OtherDependencyPrivateToThisFile
> ```
> The `open` access-level modifier is rejected on import declarations.

TL;DR: We released a new macOS app called Cilicon, which provisions and runs ephemeral virtual machines for CI. Using it, we were able to switch to self-hosted Actions Runners and speed up our CI by 3x while giving some of our damaged M1 MacBook Pro devices a second life.

I believe that ML is a new way to build software, and I know that many Swift developers want to incorporate AI features in their apps. The ML ecosystem has matured a lot, with thousands of models that solve a wide variety of problems. Moreover, LLMs have recently emerged as almost general-purpose tools – they can be adapted to new domains as long as we can model our task to work on text or text-like data. We are witnessing a defining moment in computing history, where LLMs are going out of research labs and becoming computing tools for everybody.

However, using an LLM model such as Llama in an app involves several tasks which many people face and solve alone. We have been exploring this space and would love to continue working on it with the community. We aim to create a set of tools and building blocks that help developers build faster.

Today, we are publishing this guide to go through the steps required to run a model such as Llama 2 on your Mac using Core ML. We are also releasing alpha libraries and tools to support developers in the journey. We are calling all Swift developers interested in ML – is that all Swift developers? – to contribute with PRs, bug reports, or opinions to improve this together.

Here are two rules for working with representables:

  • When updating a UIView in response to a SwiftUI state change, we need to go over all the representable’s properties, but only change the UIView properties that need it.
  • When updating SwiftUI in response to a UIKit change, we need to make sure these updates happen asynchronously.

If we don’t follow these rules, there are a few issues we might see:

  • The dreaded “Modifying state during view update, this will cause undefined behavior” warning
  • Unnecessary redraws of our UIViewRepresentable, or even infinite loops
  • Strange behavior where the state and the view are a little bit out of sync

In my testing, these issues are becoming less relevant with UIKit, but are very relevant when dealing with AppKit. My guess is that UIKit components
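The two rules can be sketched in a minimal text view wrapper (the view and property names are illustrative):

```swift
import SwiftUI
import UIKit

struct WrappedTextView: UIViewRepresentable {
    @Binding var text: String

    func makeUIView(context: Context) -> UITextView {
        let view = UITextView()
        view.delegate = context.coordinator
        return view
    }

    func updateUIView(_ uiView: UITextView, context: Context) {
        // Rule 1: only touch the UIView property when it actually changed,
        // to avoid unnecessary redraws and infinite loops.
        if uiView.text != text {
            uiView.text = text
        }
    }

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, UITextViewDelegate {
        var parent: WrappedTextView
        init(_ parent: WrappedTextView) { self.parent = parent }

        func textViewDidChange(_ textView: UITextView) {
            // Rule 2: push UIKit changes back into SwiftUI asynchronously,
            // so state isn't modified during a view update.
            DispatchQueue.main.async {
                self.parent.text = textView.text
            }
        }
    }
}
```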

In this post, I will detail how I moved my data out of 1Password and into iCloud Keychain and use the new Passwords preference pane introduced in macOS Monterey. I have only recently switched from 1Password to iCloud Keychain so this post will not dive into the pros and cons of the two.

I strongly advise against implementing your own crash reporter. It’s very easy to create a basic crash reporter that works well enough to debug simple problems. It’s impossible to implement a good crash reporter, one that’s reliable, binary compatible, and sufficient to debug complex problems. The bulk of this post is a low-level explanation of that impossibility.

Earlier I said “It’s impossible to implement a good crash reporter”, and I want to explain why I’m confident enough in my conclusions to use that specific word. There are two fundamental problems here:

  • On iOS (and the other iOS-based platforms, watchOS and tvOS) your crash reporter must run inside the crashed process. That means it can never be 100% reliable. If the process is crashing then, by definition, it’s in an undefined state. Attempting to do real work in that state is just asking for problems [1].
  • To get good results your crash reporter must be intimately tied to system implementation details. These can change from release to release, which invalidates the assumptions made by your crash reporter. This isn’t a problem for the Apple crash reporter because it ships with the system. However, a crash reporter that’s built in to your product is always going to be brittle.

Have you ever noticed that crash logs sometimes don't make much sense or are missing some symbols? Unlike traditional UIKit applications, Apple does not provide debug symbols (dSYMs) for SwiftUI. This means that any crash containing SwiftUI addresses in the stack trace will not be symbolicated.

We've discovered a way to symbolicate any Apple framework and want to share it with everyone.

Before iOS 17 if you wanted to give haptic feedback to a user from a SwiftUI view you’d use one of the UIKit (or AppKit) feedback generators.

In iOS 17, Apple added a range of sensory feedback view modifiers directly to SwiftUI to play haptic and/or audio feedback.
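A minimal sketch of the iOS 17 modifier (the view and the `isSaved` flag are hypothetical):

```swift
import SwiftUI

struct SaveButton: View {
    @State private var isSaved = false

    var body: some View {
        Button("Save") { isSaved = true }
            // Plays success feedback whenever `isSaved` changes.
            .sensoryFeedback(.success, trigger: isSaved)
    }
}
```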

JFrog now offers the first and only Swift binary package repository, enabling developers to use JFrog Artifactory for resolving Swift dependencies instead of enterprise source control (Git) systems. Swift developers can benefit from Artifactory’s robust binary management and the ways that it contributes to stable and efficient CI/CD, massive scalability, and securing the software supply chain.

In general, the right thing to do depends on why your class is safe:

  • If your class instances can be safely referenced by multiple threads because their stored properties are all immutable lets, your class can just be Sendable.
  • If your class instances can be safely referenced by multiple threads because in practice their stored properties are all immutable, but for some reason (e.g. a complex initialization pattern that completes before the object is shared across threads) some of the properties have to be declared as mutable vars, that is a perfectly reasonable use of @unchecked Sendable. Consider adding some sort of lifecycle assertion to your setters, e.g. a "this is immutable now" flag.
  • If your class instances can be safely referenced by multiple threads because their mutable storage is only actually accessed from a globally-singleton thread, your class should be associated with a global actor.
  • If your class instances can be safely referenced by multiple threads because their mutable storage is only actually accessed under a lock, that is a reasonable use of @unchecked Sendable. This is an important pattern that we're working on safe ways to express.
  • If your class instances can't generally be safely referenced by multiple threads, and in fact they aren't referenced by multiple threads and just get created, used, and destroyed on a single thread, they should not have to be Sendable at all. In this case, it's worth figuring out why you think you need to do so. It's possible that you're actually doing something dangerous, or maybe you've got some funny sort of concurrent context that Swift doesn't understand behaves like an isolated serial executor. Consider if there's an alternative way to express your pattern in a way that Swift will understand.
  • If your class instances can't generally be safely referenced by multiple threads, but instances do need to be moved between threads occasionally, you should not use @unchecked Sendable on the class. Instead, you should suppress the warning locally by "smuggling" the object across the sendability boundary: make a value of an @unchecked Sendable struct that holds the object, then send that instead. This is safe as long as you really do stop using the object in the original context (and only transfer it to one context at a time), and it's much better than pretending that any send is safe. This is a very important pattern that we're actively working on safe ways to express.
  • If your class instances can be safely referenced by multiple threads because their mutable storage is only actually accessed from one of the threads at a time, but that thread isn't globally singleton, the accesses aren't mediated by something like a lock, and you really do maintain active references on multiple threads... I mean, I'm willing to assume that you've got some justification for why this is being done safely, but this seems like a very treacherous pattern, and it's hard to imagine Swift ever finding a way to support it safely. Consider whether you can find a more structured way to express this. If not, you're going to have to just use @unchecked Sendable on the class and accept that you're losing out on concurrency safety.
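The lock-protected pattern from the list above can be sketched like this (a minimal example; the class is hypothetical):

```swift
import Foundation

// Safe to share across threads because every access to the mutable
// state goes through a single lock; hence `@unchecked Sendable`.
final class Counter: @unchecked Sendable {
    private let lock = NSLock()
    private var value = 0

    func increment() {
        lock.lock()
        defer { lock.unlock() }
        value += 1
    }

    var current: Int {
        lock.lock()
        defer { lock.unlock() }
        return value
    }
}
```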
```kotlin
package arrow.typeclasses

import arrow.Kind
import arrow.core.Either
import arrow.core.Left
import arrow.core.Right
import arrow.core.andThen
import arrow.core.left
import arrow.core.right

/**
 * ank_macro_hierarchy(arrow.typeclasses.Selective)
 */
interface Selective<F> : Applicative<F> {
  fun <A, B> Kind<F, Either<A, B>>.select(f: Kind<F, (A) -> B>): Kind<F, B>

  private fun Kind<F, Boolean>.selector(): Kind<F, Either<Unit, Unit>> =
    map { bool -> if (bool) Unit.left() else Unit.right() }

  fun <A, B, C> Kind<F, Either<A, B>>.branch(fl: Kind<F, (A) -> C>, fr: Kind<F, (B) -> C>): Kind<F, C> {
    val nested: Kind<F, Either<A, Either<B, Nothing>>> = map { it.map(::Left) }
    val ffl: Kind<F, (A) -> Either<Nothing, C>> = fl.map { it.andThen(::Right) }
    return nested.select(ffl).select(fr)
  }

  fun <A> Kind<F, Boolean>.whenS(x: Kind<F, () -> Unit>): Kind<F, Unit> =
    selector().select(x.map { f -> { _: Unit -> f() } })

  fun <A> Kind<F, Boolean>.ifS(fl: Kind<F, A>, fr: Kind<F, A>): Kind<F, A> =
    selector().branch(fl.map { { _: Unit -> it } }, fr.map { { _: Unit -> it } })

  fun <A> Kind<F, Boolean>.orS(f: Kind<F, Boolean>): Kind<F, Boolean> =
    ifS(just(true), f)

  fun <A> Kind<F, Boolean>.andS(f: Kind<F, Boolean>): Kind<F, Boolean> =
    ifS(f, just(false))
}
```

There is a lot more I wanted to talk about. And I think the most important concept here is understanding how the code translates into trees and how you can then use those trees for state management, for layout, and many other things.

Improve the way you work and strengthen your skills as an experienced mobile engineer.

August

We’ve found success going deep in code sharing across the entire stack throughout our past experiences. As code sharing is simpler in a monorepo, we moved in that direction after many years of painful segregated code sharing practices. amo is now one single monorepo containing all projects and built using the same build system: Bazel (an open-source port of Google Blaze).

Short guide on how to get a generator-apnonce pair for A12+ iOS devices (both jailbroken and non-jailbroken).

One of the interesting things is that we can visualize the safe area using an overlay and a geometry reader. We can add ignoresSafeArea to the geometry reader. Inside the geometry reader, we get access to the size of the safe area insets as well as the safe area size itself.
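A sketch of that technique (the view is hypothetical):

```swift
import SwiftUI

struct ContentView: View {
    var body: some View {
        Text("Hello")
            .overlay {
                // Because the geometry reader ignores the safe area, its
                // proxy reports both the full size and the safe area insets.
                GeometryReader { proxy in
                    Rectangle()
                        .stroke(.red)
                        .padding(.top, proxy.safeAreaInsets.top)
                        .padding(.bottom, proxy.safeAreaInsets.bottom)
                }
                .ignoresSafeArea()
            }
    }
}
```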

While it’s common to hear over-simplified rules like “Always use weak references within closures”, writing well-performing and predictable apps and systems often requires a bit more nuanced thinking than that. Like with most things within the world of software development, the best approach tends to be to thoroughly learn the underlying mechanics and behaviors, and then choose how to apply them within each given situation.

Streaming media apps and long-running apps that send continual updates use an ongoing stream to upload data, rather than sending a single block of data or a flat file. You can configure an instance of URLSessionUploadTask (a subclass of URLSessionTask) to work with a stream that you provide, and then fill this stream with data indefinitely.

The task gets the stream by calling your session’s delegate, so you need to create a session and set your own code as its delegate.
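A minimal sketch of that delegate setup follows. The `StreamingUploader` class name is my own; the real pieces are `uploadTask(withStreamedRequest:)`, the `urlSession(_:task:needNewBodyStream:)` delegate callback, and `Stream.getBoundStreams`, which gives you a writable stream bound to the readable one handed to the task.

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// Sketch only: `StreamingUploader` is an illustrative name, not from the article.
// The session asks the delegate for a body stream; we hand it the read side of
// a bound pair and keep the write side to fill with data indefinitely.
final class StreamingUploader: NSObject, URLSessionTaskDelegate {
    private(set) var outputStream: OutputStream?

    func start(url: URL) -> URLSessionUploadTask {
        let session = URLSession(configuration: .default, delegate: self, delegateQueue: nil)
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        let task = session.uploadTask(withStreamedRequest: request)
        task.resume()
        return task
    }

    // Called by the session whenever the task needs a (new) body stream.
    func urlSession(_ session: URLSession, task: URLSessionTask,
                    needNewBodyStream completionHandler: @escaping (InputStream?) -> Void) {
        var input: InputStream?
        var output: OutputStream?
        // Bound pair: whatever we write to `output` becomes readable from `input`.
        Stream.getBoundStreams(withBufferSize: 4096, inputStream: &input, outputStream: &output)
        output?.open()
        self.outputStream = output
        completionHandler(input)
    }
}
```

From there, the app writes chunks to `outputStream` for as long as the upload should continue, and closes it to end the request body.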

Look up Apple API errors quickly!

Apple classified several APIs that can be misused to access device signals to try to identify the device or user (a.k.a. fingerprinting).

The APIs were grouped as follows:

  • File timestamp APIs
  • System boot time APIs
  • Disk space APIs
  • Active keyboard APIs
  • User defaults APIs

The actual list of "required reason APIs", consisting of UserDefaults, ProcessInfo.systemUptime, and many others, can be found here.

Update your existing app to leverage the benefits of Observation in Swift.

Understand the Apple Haptic and Audio Pattern (AHAP) file format.

AHAP is a JSON-like file format that specifies a haptic pattern through key-value pairs, analogous to a dictionary literal, except in a text file. You add an AHAP file to your Xcode project bundle like any other file resource, such as an audio file or an image.
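Because AHAP is plain JSON, a minimal pattern can be written and inspected by hand. The sketch below (a single transient tap; the parameter values are my own choices) uses the documented keys `Version`, `Pattern`, `Event`, `EventType`, and `EventParameters`:

```swift
import Foundation

// A minimal AHAP pattern: one sharp transient tap at t = 0.
let ahap = """
{
  "Version": 1.0,
  "Pattern": [
    {
      "Event": {
        "Time": 0.0,
        "EventType": "HapticTransient",
        "EventParameters": [
          { "ParameterID": "HapticIntensity", "ParameterValue": 1.0 },
          { "ParameterID": "HapticSharpness", "ParameterValue": 0.8 }
        ]
      }
    }
  ]
}
"""

// Since the format is ordinary JSON, it can be inspected with JSONSerialization.
let object = try! JSONSerialization.jsonObject(with: Data(ahap.utf8)) as! [String: Any]
let events = object["Pattern"] as! [[String: Any]]
print(events.count)
```

On device, you would load such a file from the bundle with `CHHapticPattern(contentsOf:)` and play it through a `CHHapticEngine`.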

That's enough of a preamble. Let's get into the architecture itself. There will be dedicated articles for every aspect of Puddles and you can find a more thorough and detailed overview here or in the repository. The following is just meant to be a quick collection of the key ideas of the architecture.

Puddles suggests an architecture that separates your code base into 4 distinct layers, each with its own responsibilities and functions, encouraging a modular and maintainable project structure for your app.

This is much more like a workflow. Using strength we can rewrite any (monadic) do expression as a left-to-right workflow, with the cost of having to throw in some applications of strength to carry along all of the captured variables. It's also using a composition of arrows in the Kleisli category.

A monad with a strength function is called a strong monad. Clearly all Haskell monads are strong as I wrote strength to work with any Haskell monad. But not all monads in category theory are strong. It's a sort of hidden feature of Haskell (and the category Set) that we tend not to refer to explicitly. It could be said that we're implicitly using strength whenever we refer to earlier variables in our do expressions.
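The Haskell `strength` described above has a direct Swift analogue. Here is a sketch for the Optional monad (only the function name follows the article; the rest is my own illustration):

```swift
// Strength for the Optional monad: pull an already-bound value `a`
// inside the monadic value `mb`, pairing the two up.
func strength<A, B>(_ a: A, _ mb: B?) -> (A, B)? {
    mb.map { b in (a, b) }
}

// With strength, a nested "do"-style computation can be flattened into a
// left-to-right pipeline, carrying captured variables along as tuples:
let result: Int? = strength(10, Int("32")).map { $0.0 + $0.1 }
```

This is the implicit plumbing that lets later steps of a monadic computation refer to variables bound earlier.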

Explicit module builds are an attempt to move the compilation of textual modules into binary modules out of the Swift compiler instance that imports the module, and up into the build system as an explicit compilation step. The build system is then responsible for scheduling the compilation, checking timestamps on inputs (for incremental builds), and ensuring that all of the binary modules needed by a Swift compilation job have already been built before that compilation job executes.

Explicit module builds are meant to eliminate the problems with implicit module builds, improving parallelism, reducing redundant work among Swift compiler instances, and enabling new technologies such as distributed builds. There are a number of technologies that we are working on in the Swift compilation stack to enable explicit module builds.

Foundation’s URL loading is robust. iOS 7 brought the new URLSession architecture, making it even more robust. However, one thing that it’s never been able to do natively is multipart file uploads.

Let me show you how to create HTTP requests using multipart (form data) body without a third party library. Simple solution.

So, I wanted to talk about why I fell away from Haskell. I should say up front: this is a piece about why I left Haskell, and not about why you should. I don't think people are wrong for using Haskell, or that Haskell is bad. In fact, if I've written this piece the way I hope to write it, I would hope that people read it and come away with a desire to maybe learn Haskell themselves!

Dependent sums and supporting typeclasses for comparing and displaying them

This library defines a dependently-typed finite map type. It is derived from Data.Map.Map in the containers package, but rather than (conceptually) storing pairs indexed by the first component, it stores DSums (from the dependent-sum package) indexed by tag.

I'm finally un-redacting the 6th way, and ironically, it's the app sandbox. I discovered—almost by accident—that a sandboxed app could modify files that it shouldn't be able to modify: files inside the bundle of a notarized app that were supposedly protected by App Management security.

Of course a sandboxed app has somewhat limited access to the file system, although it's notable that the /Applications folder is included within the sandbox. Regardless, the initial extent of the sandbox is not really an issue for an attacker, because a non-sandboxed app can open files in a sandboxed app, thereby extending the latter's sandbox.

To demonstrate the bypass, I've created a sample Xcode project that you can download.

Abstract: A topos is a categorical model of constructive set theory. In particular, the effective topos is the categorical `universe' of recursive mathematics. Among its objects are the modest sets, which form a set-theoretic model for polymorphism. More precisely, there is a fibration of modest sets which satisfies suitable categorical completeness properties, that make it a model for various polymorphic type theories.

The app launch experience is the first impression you make on a user. Every millisecond they wait for your app to start is valuable time they could spend elsewhere. If your app has high engagement and is used multiple times a day then users have to wait for launch over and over. Apple recommends the first frame be drawn in under 400ms. This ensures your app is ready to be used when SpringBoard’s app open animation finishes.

With only 400ms to spare, developers need to be very careful not to accidentally increase app startup time. However, app launch is such a complicated process with so many moving parts that it’s difficult to know what exactly contributes to it. I started diving deeper into the relationship between app binary size and startup time while working on Emerge, the app size profiler. In this post, I’ll demystify one of the more esoteric aspects of app launch and show you how Swift reference types contribute to the binary size and slower app start times.

Maybe a bit dramatic, but I don't think it would be far from the truth to say that all apps have some dead code. There are quite a few benefits to removing this code. Unsurprisingly, dead code can affect app size and compile time, but excessive dead code also introduces complexity to a codebase and slows developer productivity.

Let's say you're updating a function and need to modify all call sites to work with the new function signature. This time is wasted if the call sites are dead. Dead code inherently increases the line count of a codebase, which is correlated with the number of bugs.

There are even performance implications for having dead code take up memory in your iOS app, which I’ve mentioned in articles about fixups and order files. Emerge helps reduce the complexity of apps and infrastructure, including by finding dead code such as protocols without any conformances. This is done with static analysis.

Reaper, Emerge’s new iOS framework, goes beyond static analysis by detecting unused code at runtime. This extra dead code detection helps to build better, simpler apps.

In this post, we’ll look at what dead code is and how runtime detection expands the amount we can find.

Reaper is an SDK that you can put into your production or alpha users' apps to report which Swift and Objective-C classes were used for each user session. We take those reports and generate a list of all the classes in the binary, for each version of your app. The SDK will detect unused classes within the main binary specifically, not in dynamic frameworks. It's easy to integrate, either as a standalone binary or as a CocoaPod, and adds very little overhead to your app.

Reaper supports iOS 15, 16, and 17. It supports all classes written in Objective-C and most classes written in Swift.

It is tempting to build abstractions so developers have to do less and build more. However, this can easily end up frustrating developers if not done right.

Whenever I take a library or a framework for a test drive, work through their “Getting Started” guide or browse the example code, I try to observe my own reaction. Whenever something makes me go “this feels wrong”, I make note of it and try to distill what I would have done differently. In this blog post, I try to figure out what I think makes a good developer experience.

Be forewarned: Despite its title, this book contains absolutely no helpful advice for obtaining refunds.

Instead, a cast of fictional characters and Ritcher’s alter egos regale you with adventures from his storied life. Almost none of it is true.

You’ll read about how he served in the Belgian National Guard, launched the Burger Chef restaurant chain, hosted the hit TV show Animal Autopsy and spent several years married to Kirsten Dunst during her two terms as governor of Kentucky.

The book is full of secret nuggets, including another book called Presidential Fun Facts & Trivia, information on bird spotting, proper table settings and a full 90 pages of About the Author (which includes a little detour called About the Author’s Dogs).

How to Get Your Fucking Money Back is an absurd, heavily footnoted, 154-page ride that gleefully disregards reality, the rules of storytelling and, occasionally, some common respect for the English language.

Terraform enables you to safely and predictably create, change, and improve infrastructure. This is an open-source fork of HashiCorp's Terraform that keeps the MPL license, following HashiCorp's announced change of license to the BSL. The fork is created and maintained by Digger.dev, an open-source CI runner for IaC.

GRDB is a production-ready database library for Swift, based on SQLite.

It provides raw access to SQL and advanced SQLite features, because one sometimes enjoys a sharp tool. It has robust concurrency primitives, so that multi-threaded applications can efficiently use their databases. It grants your application models with persistence and fetching methods, so that you don't have to deal with SQL and raw database rows when you don't want to.

See Why Adopt GRDB? if you are looking for your favorite database library.

This forum is intended to answer community questions, raise your interest, share stories, experience, and best practices.

In the constantly evolving world of iOS development, SwiftUI has undeniably brought forth a revolution in how we approach interface design. The introduction of the ViewThatFits struct simplifies adaptive layout construction, eradicating the tedious task of handling different screen sizes and frames manually.

By comprehending the nuances between proposed and ideal sizes, developers can leverage the power of SwiftUI to automatically select the most fitting view based on its parent’s dimensions. This not only aids in the creation of more responsive apps but also reduces redundancy in code, promoting cleaner, more maintainable projects.

The notation used to describe type systems varies from presentation to presentation, so giving a comprehensive overview is impossible. However, most presentations share a large, common subset, so this answer will attempt to provide a foundation of enough of the basics to understand variations on the common theme.
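As a concrete instance of that common subset, the judgment and rule below use the standard pieces: a context Γ, the turnstile ⊢, and a horizontal inference line. This is the usual function-application rule, not taken from any single presentation:

```latex
% "\Gamma \vdash e : \tau" reads: under typing context \Gamma,
% expression e has type \tau.
\[
\frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2
      \qquad
      \Gamma \vdash e_2 : \tau_1}
     {\Gamma \vdash e_1 \; e_2 : \tau_2}
\;(\text{App})
\]
```

Read bottom-up, the rule says: to show an application has type τ₂, show the function has an arrow type τ₁ → τ₂ and the argument has the matching domain type τ₁.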

Allow each engineer to provision their own "staging" environment using plain Terraform files.

Layerform helps engineers create reusable environment stacks using plain Terraform files (the actual OSS version). Ideal for multiple "staging" environments.

Generates PDF data from the web view’s contents asynchronously.

Compositionality describes and quantifies how complex things can be assembled out of simpler parts.

Compositionality (ISSN 2631-4444) is an open-access, arXiv-overlay journal for research using compositional ideas in any discipline. For more information, see About.

With the Mermaid app, you can easily create and import diagrams with Markdown-inspired syntax. Automate the process of generating complex diagrams without worrying about design and layout.

Key Features:

  • Create diagrams with the popular Mermaid syntax
  • Easily design any kind of diagram, like Flowcharts, Sequence diagrams, ER diagrams and even C4 architecture
  • Copy code from other sources like GitHub or Notion into Miro
  • Design and layout is automatically applied
  • Diagrams are fully editable

Swift 5.5 introduced mechanisms to eliminate data races from the language, including the Sendable protocol (SE-0302) to indicate which types have values that can safely be used across task and actor boundaries, and global actors (SE-0316) to help ensure proper synchronization with (e.g.) the main actor. However, Swift 5.5 does not fully enforce Sendable nor all uses of the main actor because interacting with modules which have not been updated for Swift Concurrency was found to be too onerous. We propose adding features to help developers migrate their code to support concurrency and interoperate with other modules that have not yet adopted it, providing a smooth path for the Swift ecosystem to eliminate data races.

Swift-evolution threads: [Pitch] Staging in Sendable checking, Pitch #2, Pitch #3

graph TD;
classDef facadeCommand fill:#779fae
classDef command fill:#aec6cf
classDef result fill:#cfcfc4 
classDef event fill:#ffb853
classDef domainEvent fill:#ffcb81
classDef integrationEvent fill:#ffdeaf
classDef query fill:#62d862
classDef readModel fill:#77dd77
classDef userInterface fill:#a2e8a2
classDef aggregate fill:#fdfd9d
classDef service fill:#fcfc78
classDef policy fill:#b6a2db
classDef saga fill:#c9bbe5
classDef process fill:#ddd4ee
classDef timer fill:#cfcfc4
classDef person fill:#ffd1dc
classDef system fill:#ffd1dc
classDef comment fill:transparent
 
FacadeCommand:::facadeCommand --> Command:::command
Result:::result --> Event:::event
DomainEvent:::domainEvent --> IntegrationEvent:::integrationEvent
Query:::query --> ReadModel:::readModel
UserInterface:::userInterface --> Aggregate:::aggregate
Service:::service --> Policy:::policy
Saga:::saga --> Process:::process
Timer:::timer --> Person:::person
System:::system --> Comment:::comment
 
  • Objective-C Internals

    Get ready to dive deep into the inner workings of the Objective-C language and runtime! Each post delves into a specific aspect of the language and explores the details of its implementation. I hope you’ll find this valuable for demystifying the language, tackling tricky bugs, and optimizing your code for performance.

  • Getting Started with Plugins [SPM]

This guide provides a brief overview of Swift Package Manager plugins, describes how a package can make use of plugins, and shows how to get started writing your own plugins.

When git log encounters a merge commit, it normally follows the history backwards through both parents.

But if we say --first-parent, git log will ignore all of the history in the second parent of a merge commit.

Best collaboration platform for Event Storming & Event Modeling

Multi-tool Device for Geeks

Flipper Zero is a portable multi-tool for pentesters and geeks in a toy-like body. It loves hacking digital stuff, such as radio protocols, access control systems, hardware and more. It's fully open-source and customizable, so you can extend it in whatever way you like.

Flipper Zero is a tiny piece of hardware with the curious personality of a cyber-dolphin. It can interact with digital systems in real life and grow while you use it. Explore any kind of access control system, RFID, radio protocols, and debug hardware using GPIO pins.

> [!NOTE]  
> Highlights information that users should take into account, even when skimming.

> [!IMPORTANT]  
> Crucial information necessary for users to succeed.

> [!WARNING]  
> Critical content demanding immediate user attention due to potential risks.

SDF does for SQL what TypeScript did for JavaScript. Faster Development. Trusted Results. Safety at Scale.

This document is a guide to understanding, diagnosing and reporting compilation-performance problems in the swift compiler. That is: the speed at which the compiler compiles code, not the speed at which that code runs.

While this guide is lengthy, it should all be relatively straightforward. Performance analysis is largely a matter of patience, thoroughness and perseverance, measuring carefully and consistently, and gradually eliminating noise and focusing on a signal.

Visit the history versions of TSPL(The Swift Programming Language) with ease.

Compositionality is at the heart of computer science and several other areas of applied category theory such as computational linguistics, categorical quantum mechanics, interpretable AI, dynamical systems, compositional game theory, and Petri nets. However, the meaning of the term seems to vary across the many different applications. This work contributes to understanding, and in particular qualifying, different kinds of compositionality. Formally, we introduce invariants of categories that we call zeroth and first homotopy posets, generalising in a precise sense the π0 and π1 of a groupoid. These posets can be used to obtain a qualitative description of how far an object is from being terminal and a morphism is from being iso. In the context of applied category theory, this formal machinery gives us a way to qualitatively describe the "failures of compositionality", seen as failures of certain (op)lax functors to be strong, by classifying obstructions to the (op)laxators being isomorphisms. Failure of compositionality, for example for the interpretation of a categorical syntax in a semantic universe, can both be a bad thing and a good thing, which we illustrate by respective examples in graph theory and quantum theory.

The Fifth International Conference on Applied Category Theory took place at the University of Strathclyde on 18−22 July 2022, following the previous meetings at Leiden (2018), Oxford (2019), MIT (2020, fully online), and Cambridge (2021). It was preceded by the Adjoint School 2022 (11−15 July), a collaborative research event in which junior researchers worked on cutting-edge topics under the mentorship of experts. The conference comprised 59 contributed talks, a poster session, an industry showcase session, and a session where junior researchers who had attended the Adjoint School presented the results of their research at the school. Information regarding the conference may be found at https://msp.cis.strath.ac.uk/act2022.

Learn how to automate exporting a DocC archive file using GitHub Actions, and publish it on the internet using GitHub Pages as a static website host.

This proposal introduces function body macros, which are attached macros that can create or augment a function (including initializers, deinitializers, and accessors) with a new body.

In this post you will learn how to configure your Swift development environment for Linux using the Dev Containers VS Code extension. This unlocks the ability to build, run, and debug Swift apps on Linux.

Ensure your use of covered API is consistent with policy.

Some APIs that your app uses to deliver its core functionality — in code you write or included in a third-party SDK — have the potential of being misused to access device signals to try to identify the device or user, also known as fingerprinting. Regardless of whether a user gives your app permission to track, fingerprinting is not allowed. Describe the reasons your app or third-party SDK on iOS, iPadOS, tvOS, visionOS, or watchOS uses these APIs, and check that your app or third-party SDK only uses the APIs for the expected reasons.

For each category of required reason API that your app or third-party SDK uses, add a dictionary to the NSPrivacyAccessedAPITypes array in your app or third-party SDK’s privacy manifest file that reports the reasons your app uses the API category. If you use the API in your app’s code, then you need to report the API in your app’s privacy manifest file. If you use the API in your third-party SDK’s code, then you need to report the API in your third-party SDK’s privacy manifest file. Your third-party SDK can’t rely on the privacy manifest files for apps that link the third-party SDK, or those of other third-party SDKs the app links, to report your third-party SDK’s use of required reasons API.
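For illustration, one such dictionary in a PrivacyInfo.xcprivacy file might look like the excerpt below. The category identifier and the reason code CA92.1 ("access info from the same app") are from Apple's documented list; check the current documentation for the codes matching your actual use.

```xml
<!-- Excerpt from PrivacyInfo.xcprivacy: declaring use of the
     user defaults category with reason code CA92.1. -->
<key>NSPrivacyAccessedAPITypes</key>
<array>
  <dict>
    <key>NSPrivacyAccessedAPIType</key>
    <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
    <key>NSPrivacyAccessedAPITypeReasons</key>
    <array>
      <string>CA92.1</string>
    </array>
  </dict>
</array>
```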

We are excited to share the news of the Lean Focused Research Organization (FRO), a new nonprofit dedicated to advancing the formal mathematics revolution. We aim to tackle the challenges of scalability, usability, and proof automation in the Lean proof assistant. Our 5-year mission is to empower Lean towards self-sustainability.

Implementing push notifications in SwiftUI is a potent tool for boosting user engagement and providing real-time updates. The UNUserNotificationCenter is the hero of our push notification story, facilitating every aspect of notifications from scheduling to user interaction.

The vast variety of authorization options in Swift, such as alert, badge, sound, CarPlay, provisional, critical alert, and providesAppNotificationSettings, allows developers to create a custom user experience, tailoring notifications to specific app needs and user preferences. However, it’s important to implement these options judiciously to ensure a seamless and unobtrusive user experience.

Some of the development of "Set-Theoretic and Type-Theoretic Ordinals Coincide" is carried out, but using Gylterud's construction of the cumulative hierarchy 𝕍 as iterative sets instead of (axiomatically) working with the higher inductive presentation. The type 𝕆 of hereditarily transitive sets is the type of iterative ordinals and corresponds to 𝕍ᵒʳᵈ in the original development Ordinals.CumulativeHierarchy.

In vim mode, position the cursor on a word and press * to search for that word in the current file.


July

Swift’s FormatStyle and ParseableFormatStyle are the easiest way to convert Foundation data types to and from localized strings. Unfortunately, Apple hasn’t done a great job of documenting just what they can do, or how to use them.

Powered by ChatGPT & GPT-4 API

Developer Duck is an AI-powered programming assistant that helps you with your programming tasks. Including features like code suggestions, completion, analysis, and refactoring, Developer Duck is faster than searching the web. Try it for free and put it to the test.

Learn the main concepts of Flutter from a native iOS developer's point of view.

Use SPM to store dependency checkouts in a repository and do it better than CocoaPods

Macros are a mechanism for running JavaScript functions at bundle-time. The values returned from these functions are directly inlined into your bundle.

For small things where you would otherwise have a one-off build script, bundle-time code execution can be easier to maintain. It lives with the rest of your code, it runs with the rest of the build, it is automatically parallelized, and if it fails, the build fails too.

The most familiar approach to dealing with variable references in higher-order programs, explained in any textbook, is the variable binding environment: an association of variable names with their bound values. In an interpreter (the eval function), the environment is one of its arguments. Compiled functions receive the environment as an extra argument. Terms with variables are commonly thought to mean functions from the environment to the value domain.

This article demonstrates a different, unconventional approach to variable references, coming from the observation that a (let-)bound variable can only be used while the control remains within the corresponding let-expression body. In interpreter terms, a variable may be accessed only while the interpreter is still handling the let-form, which remains on the interpreter stack. Therefore, to find the value associated with the bound variable we search the stack — or, better, just point to the right place on the stack that stores that value.
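The "search the stack" idea can be sketched with a toy evaluator. Everything here (type and function names) is my own illustration, not code from the article: each let-form pushes a binding on entry and pops it on exit, and variable lookup scans the stack from the top — which is exactly dynamic binding.

```swift
// Toy evaluator illustrating stack-search variable lookup (dynamic binding).
indirect enum Expr {
    case lit(Int)
    case variable(String)
    case letIn(String, Expr, Expr)   // let x = e1 in e2
    case add(Expr, Expr)
}

var bindings: [(name: String, value: Int)] = []

func eval(_ e: Expr) -> Int {
    switch e {
    case .lit(let n):
        return n
    case .variable(let x):
        // Search the stack from the top: the most recent binding wins.
        guard let hit = bindings.last(where: { $0.name == x }) else {
            fatalError("unbound variable \(x)")
        }
        return hit.value
    case .letIn(let x, let bound, let body):
        bindings.append((x, eval(bound)))   // push on entry to the let-body
        defer { bindings.removeLast() }     // pop when control leaves it
        return eval(body)
    case .add(let l, let r):
        return eval(l) + eval(r)
    }
}
```

Evaluating `let x = 1 in x + (let x = 2 in x)` yields 3, because the inner lookup finds the most recent binding while it remains on the stack.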

That is a very attractive idea, which seems first to come to mind when implementing higher-order languages. It was indeed the first to come to mind for J. McCarthy and A. Turing. Alas, the idea — now called dynamic binding — is flawed (as Turing soon realized). To regard functions as first-class, to be able to pass them as arguments and return them as results, lexical binding is indispensable.

This article demonstrates that, perhaps surprisingly, lexical binding can be implemented via dynamic binding, in strict and lazy settings.

The main advantage of this alternative approach is making variables a modular feature: variables and (first-class) functions can be introduced to a first-order language without any changes to the latter, without any need to re-write the interpretation of the first-order fragment (numbers, strings, etc.) to pass the environment. We may write extensible interpreters and compilers. Another application is the surprisingly simple implementation of staging.

Dynamic binding can be implemented in many ways. One particular implementation, in terms of delimited control, turns out particularly insightful. Incidentally, it demonstrates the need for multi-shot (non-affine) delimited continuations — perhaps the first non-trivial use of such delimited continuations aside from non-determinism.

Another insight is that a let-expression — often considered a mere syntax sugar in lambda calculus — turns out more fundamental than the lambda abstraction.

The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications. This page shows you how to use SAM to deploy Server-side Swift applications to AWS. Each application uses AWS Lambda Functions written in Swift. The functions use the AWS SDK for Swift and the Swift AWS Lambda Runtime.

Seamless and efficient Docker and Linux on your Mac. Glide through your work faster with our Docker Desktop alternative.

If you've ever looked at the SVG code for an icon before, you might have noticed that they're usually made up of a bunch of path elements, each with a cryptic d attribute.

Keep your Mac safe with ClamXAV, the trusted anti-virus and malware scanner for macOS.

The Uxn ecosystem is a little personal computing stack, created to host our tools and games, programmable in its own unique assembly language.

It was designed with an implementation-first mindset and a focus on creating portable graphical applications. The distribution of Uxn projects is akin to sharing game ROMs for any classic console emulator.

To learn more, read about the uxn design, see the VM specs, or the IO specs.

  • Timely Computation — Conal Elliott

    This paper addresses the question “what is a digital circuit?” in relation to the fundamentally analog nature of actual (physical) circuits. A simple informal definition is given and then formalized in the proof assistant Agda. At the heart of this definition is the timely embedding of discrete information in temporally continuous signals. Once this embedding is defined (in constructive logic, i.e., type theory), it is extended in a generic fashion from one signal to many and from simple boolean operations (logic gates) to arbitrarily sophisticated sequential and parallel compositions, i.e., to computational circuits.

    Rather than constructing circuits and then trying to prove their correctness, a compositionally correct methodology maintains specification, implementation, timing, and correctness proofs at every step. Compositionality of each aspect and of their combination is supported by a single, shared algebraic vocabulary and related by homomorphisms. After formally defining and proving these notions, a few key transformations are applied to reveal the linearity of circuit timing (over a suitable semiring), thus enabling practical, modular, and fully verified timing analysis as linear maps over higher-dimensional time intervals.

    An emphasis throughout the paper is simplicity and generality of specification, minimizing circuit-specific definitions and proofs while highlighting a broadly applicable methodology of scalable, compositionally correct engineering through simple denotations and homomorphisms.

  • Native Plant Finder

Search by zip code to find plants that host the highest numbers of butterflies and moths to feed birds and other wildlife where you live.

The NSItemProvider class in Foundation is a powerful abstraction for making data available across processes that are otherwise isolated from one another. I hope this post can be a one-stop reference for developers who want a solid understanding of how item providers work, and how to use the API in a modern way.

Sockets are a way to enable inter-process communication between programs running on a server, or between programs running on separate servers. Communication between servers relies on network sockets, which use the Internet Protocol (IP) to encapsulate and handle sending and receiving data.

Network sockets on both clients and servers are referred to by their socket address. An address is a unique combination of a transport protocol like the Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), an IP address, and a port number.

In this tutorial you will learn about the following different types of sockets that are used for inter-process communication:

  • Stream sockets, which use TCP as their underlying transport protocol
  • Datagram sockets, which use UDP as their underlying transport protocol
  • Unix Domain Sockets, which use local files to send and receive data instead of network interfaces and IP packets.

In each section of this tutorial you will also learn how to enumerate the respective socket types on a Linux system. You’ll examine each type of socket using a variety of command line tools.

SwiftUI is a reactive framework where the data drives the UI. In 2019, I wrote a post detailing how I manage the various forms of data flow through a SwiftUI app, and with the help of others in the community, I iterated over this until I had a good understanding of the concepts and which methods you should use when. In 2021, I updated the post to cover the minor changes, but there have been no major modifications since then.

At WWDC 2023, things changed a lot! With the introduction of Swift macros, the SwiftUI team was able to reduce the number of property wrappers needed to send data around, and remove a lot of boilerplate code.
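A sketch of what the macro-based style looks like (the model and view names below are my own, not from the post): the @Observable macro replaces the ObservableObject/@Published pairing, and views can observe the model with plain properties.

```swift
import Observation

// Before WWDC 2023 this would have been an ObservableObject with
// @Published properties; the @Observable macro removes that boilerplate.
@Observable
final class CounterModel {
    var count = 0
    func increment() { count += 1 }
}

// In a SwiftUI view, no property wrapper is needed for read access:
//
//   struct CounterView: View {
//       let model: CounterModel   // was: @ObservedObject var model
//       var body: some View {
//           Text("\(model.count)")
//       }
//   }
```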

June

  • One-shot Algebraic Effects in Swift

    You could build an even closer analog in terms of async, since with*Continuation gives you a delimited continuation for the task (albeit a one-shot continuation, unlike the multi-shot continuations that "pure" algebraic effect systems provide in functional languages). You could for example store the effect handler in task-local state, and have the effect operations be functions that use withCheckedContinuation to access the current handler and resume the task with the result:

final class State {
  var value: Int
  init(value: Int) { self.value = value }
}

enum StateEffect {
  // Task-local values must be declared static
  @TaskLocal static var stateHandler: State?
}

// Run the block with a state handler bound for the duration of the call
func with<R>(state: State, _ body: () async throws -> R) async rethrows -> R {
  try await StateEffect.$stateHandler.withValue(state) {
    try await body()
  }
}

// Access the current state through a (one-shot) continuation
var stateValue: Int {
  get async {
    await withCheckedContinuation { cc in
      cc.resume(returning: StateEffect.stateHandler!.value)
    }
  }
}
  • FocusedValue

    A property wrapper for observing values from the focused view or one of its ancestors.

    If multiple views publish values using the same key, the wrapped property will reflect the value from the view closest to focus.
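A short sketch of the FocusedValue pattern (the key and property names here are hypothetical, invented for illustration): a view publishes a value with focusedValue(_:_:), and another view observes it through the @FocusedValue property wrapper.

```swift
import SwiftUI

// Hypothetical key for a focused note's text.
struct SelectedNoteKey: FocusedValueKey {
    typealias Value = String
}

extension FocusedValues {
    var selectedNote: String? {
        get { self[SelectedNoteKey.self] }
        set { self[SelectedNoteKey.self] = newValue }
    }
}

// A focused view publishes the value:
//   TextEditor(text: $note).focusedValue(\.selectedNote, note)
// and another view observes it:
struct NoteInspector: View {
    @FocusedValue(\.selectedNote) var note

    var body: some View {
        Text(note ?? "No note focused")
    }
}
```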

  • Advanced macOS Command-Line Tools

macOS is fortunate to have access to the huge arsenal of standard Unix tools. There are also a good number of macOS-specific command-line utilities that provide unique macOS functionality. To view the full documentation for any of these commands, run man <command>.

LeanDojo is a Python library for learning–based theorem provers in Lean, supporting both Lean 3 and Lean 4. It provides two main features:

  • Extracting data (proof states, tactics, premises, etc.) from Lean repos.
  • Interacting with Lean programmatically.

A value is what you can return from a call, pass as a (non-inout) argument, and so on. Ignoring reference types for a second, you can talk about values independently of concepts like memory. Fundamental types can be thought of as fundamental values, like particular integers and strings, and structs can be broken down recursively into the component values they store in their stored properties. For example, I might say that a particular value is Ball(diameter: .03, color: Color.orange). Here I've written the value as if I were calling a memberwise initializer with all the values of the stored properties; this works to denote the value even if I didn't actually build it that way, or even if my type doesn't actually have a memberwise initializer.

A location is part of the memory of the abstract machine. Every location has a type, and it stores a value of its type. For example, when you declare a mutable local variable, a new location is created dynamically when that variable comes into scope, and it is destroyed when the variable goes out of scope (and all the captures of it go away). Creating a location of a struct type means creating locations for all the stored properties of that struct.
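A small sketch of the value/location distinction, using a stand-in Ball type (Color is simplified to a String here; this example is not from the original text):

```swift
// Stand-in type; Color simplified to a String for illustration.
struct Ball {
    var diameter: Double
    var color: String
}

// Two different construction paths arriving at the same value:
let a = Ball(diameter: 0.03, color: "orange")
var b = Ball(diameter: 0.06, color: "orange")
b.diameter = 0.03

// a and b now denote the same value, Ball(diameter: 0.03, color: "orange"),
// even though they occupy two distinct locations in memory.
```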

AI is fundamentally changing the way we live, work, and build software. It has the potential to be the biggest platform shift since the iPhone and mobile.

With mobile we learned the painful lesson of the Apple app-store, controlled by a single monopolistic company, stifling innovation and entrepreneurship.

AI, our new platform, needs its own app-store. An unrestricted app-store built upon the open web and the OpenAPI specification.

We engineers currently have a slim chance of creating this app-store layer before some large corporation does it. We must seize this chance.

That's why we're building openpm.ai, an open source package-manager for OpenAPI files. AIs can consume packages from openpm in a similar fashion to how ChatGPT plugins work. Ultimately, AIs can use openpm to discover and interact with the world via APIs.

Everything we release is under the MIT license. We will never charge a transaction fee for our services. We will never wield editorial control. We will only remove packages that are scams or illegal under US law. At any point you can choose to export all of our packages and run them on your own server.

Design is not just a way to make sense of the world; design helps the world make sense. It takes fierce optimism to face challenges and see them as opportunities for learning, growth and change.

When people wear Apple Vision Pro, they enter an infinite 3D space where they can engage with your app or game while staying connected to their surroundings.

As you begin designing your app or game for visionOS, start by understanding the fundamental device characteristics and patterns that distinguish the platform. Use these characteristics and patterns to inform your design decisions and help you create immersive and engaging experiences.

Space. Apple Vision Pro offers a limitless canvas where people can view virtual content like windows, volumes, and 3D objects, and choose to enter deeply immersive experiences that can transport them to different places.

Immersion. In a visionOS app, people can fluidly transition between different levels of immersion. By default, an app launches in the Shared Space where multiple apps can run side-by-side and people can open, close, and relocate windows. People can also choose to transition an app to a Full Space, where it’s the only app running. While in a Full Space app, people can view 3D content blended with their surroundings, open a portal to view another place, or enter a different world.

Passthrough. Passthrough provides live video from the device’s external cameras, and helps people interact with virtual content while also seeing their actual surroundings. When people want to see more or less of their surroundings, they use the Digital Crown to control the amount of passthrough.

Spatial Audio. Vision Pro combines acoustic and visual-sensing technologies to model the sonic characteristics of a person’s surroundings, automatically making audio sound natural in their space. When an app receives a person’s permission to access information about their surroundings, it can fine-tune Spatial Audio to bring custom experiences to life.

Focus and gestures. In general, people interact with Vision Pro using their eyes and hands. People perform most actions by looking at a virtual object to bring focus to it and making an indirect gesture, like a tap, to activate it. People can also use a direct gesture to interact with a virtual object by touching it with a finger.

Ergonomics. While wearing Vision Pro, people rely entirely on the device’s cameras for everything they see, both real and virtual, so maintaining visual comfort is paramount. The system helps maintain comfort by automatically placing content so it’s relative to the wearer’s head, regardless of the person’s height or whether they’re sitting, standing, or lying down. Because visionOS brings content to people — instead of making people move to reach the content — people can remain at rest while engaging with apps and games.

Accessibility. Apple Vision Pro supports accessibility technologies like VoiceOver, Switch Control, Dwell Control, Guided Access, Head Pointer, and many more, so people can use the interactions that work for them. In visionOS, as in all platforms, system-provided UI components build in accessibility support by default, while system frameworks give you ways to enhance the accessibility of your app or game.

Apple’s initial visionOS design kit for Figma contains a comprehensive set of UI components, views, system interfaces, text styles, color styles, and materials. All of the core ingredients you need to quickly create highly realistic visionOS app designs.

If you have requests, find bugs, or have other feedback for us, please use Feedback Assistant. Select Developer Tools > Apple Design Resources.

Important: Make sure to install the latest version of SF Symbols before using this library.

We propose introducing a pair of new attributes, @inlinable and @usableFromInline. The @inlinable attribute exports the body of a function as part of a module's interface, making it available to the optimizer when referenced from other modules. The @usableFromInline attribute marks an internal declaration as being part of the binary interface of a module, allowing it to be used from @inlinable code without exposing it as part of the module's source interface.
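A minimal sketch of how the two attributes interact (function names invented for illustration): a public @inlinable function exports its body, so any internal declaration that body references must be marked @usableFromInline.

```swift
// Internal helper: stays internal in source, but @usableFromInline
// makes it part of the module's binary interface.
@usableFromInline
internal func addOne(_ x: Int) -> Int {
    x &+ 1
}

// The body of this function is exported to clients, so the
// optimizer can inline it across module boundaries.
@inlinable
public func increment(_ x: Int) -> Int {
    // Legal only because addOne is @usableFromInline.
    addOne(x)
}
```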

XcodeBenchmark contains a large codebase to measure the compilation time in Xcode.

Was it pairing? TDD? Retros? Or was it that we could write code in peace, without being bitten by possums?

  1. Better code completion for methods with many default parameters.
  2. Context awareness.
  3. Documentation Preview.
  4. Quick Action.
  5. Bookmark.
  6. Format to Multiple Lines.

The fastest way to extract Xcode build settings.

BuildSettingExtractor is a free, open-source utility that extracts build settings from an Xcode project into a set of xcconfig files.

When you’re moving Xcode build settings out of your project file and into xcconfig files, this handy utility makes that initial move a lot easier. It’s also an easy way for the curious to take a look at the build settings in a project without fear of accidentally changing them.

  • WWDC23 #SwiftUI

    Every question + answer in 2023’s #swiftui WWDC Digital Lounge organised for an easy read.

    For brevity, any requests from Apple to file a feedback have not been included in the responses. So, it’s worth emphasising that the SwiftUI team encouraged everyone to file a feedback for any unsupported behaviour, because this helps them prioritise their backlog. So, if you can’t get SwiftUI to do x today, please let them know. When you do, describe your use case so they have a clear idea what you’re trying to accomplish (this will help them understand better the general merit of the feature request).

  • Interpolate text with custom foreground style in SwiftUI

SwiftUI lets us style portions of text by interpolating Text inside another Text and applying available text modifiers, such as foregroundColor() or font().

Starting from iOS 17 we can apply more intricate styling to ranges within a Text view with foregroundStyle().

struct ContentView: View {
    let gradient = LinearGradient(
        colors: [.blue, .green],
        startPoint: .leading,
        endPoint: .trailing
    )
    
    var body: some View {
        Text("Hello, \(Text("world").foregroundStyle(gradient))!")
            .bold()
            .font(.title)
            .textCase(.uppercase)
    }
}

Learn Haskell abstractions the easy way — with real-world examples and context. You'll learn common patterns in Haskell, how to implement them yourself, and what their benefits and drawbacks are.

This short book is meant for anyone who already has a basic working understanding of Haskell, but is looking for intermediate-level knowledge.

This framework provides a means for developers to create format readers and video decoders for media that the system doesn’t natively support.

MediaExtension format readers encapsulate media assets that the system doesn’t natively support so that the system can recognize them. MediaExtension video decoders decode video formats that the system doesn’t natively support. Developers need to build format readers and video decoders as ExtensionKit bundles and embed them in a host app. Once a user installs and runs the host app, the embedded extensions become available to any app on the user’s system that opts in to using them.

  • Improving app responsiveness

    Create a more immediate user experience by removing hangs and hitches from your app’s user interface.

    An app that responds instantly to users’ interactions gives an impression of supporting their workflow. When the app responds to gestures and taps in real time, it creates an experience for users that they’re directly manipulating the objects on the screen. Apps with a noticeable delay in user interaction (a hang) or movement on screen that appears to jump (a hitch) shatter that illusion, leaving the user wondering whether the app is working correctly. To avoid hangs and hitches, keep the following rough thresholds in mind as you develop and test your app.

    100 ms is the threshold for delays in direct user interaction. If a delay in user interaction becomes longer than 100 ms, it starts to become noticeable and causes a hang. A shorter delay is rarely noticeable.

    5 ms is the threshold to achieve fluid motion on-screen. For fluid, uninterrupted motion, a new frame needs to be ready whenever the screen updates. On Apple devices, this can be as often as 120 times per second, or every 8.3 ms. Depending on system conditions and other work that your app performs, you might not have the full 8.3 ms to prepare your next screen update. If the work that your app needs to perform to update the screen is less than 5 ms, the update is usually ready in time. If it takes longer, you need to take a closer look at the specific devices you’re targeting and the display refresh rate your app needs to support.

    This article describes several best practices to help you avoid introducing hangs and hitches in your app, as well as multiple tools to help you detect and analyze these types of responsiveness issues.

  • Beyond the basics of structured concurrency

It's all about the task tree: Find out how structured concurrency can help your apps manage automatic task cancellation, task priority propagation, and useful task-local value patterns. Learn how to manage resources in your app with useful patterns and the latest task group APIs. We'll show you how you can leverage the power of the task tree and task-local values to gain insight into distributed systems. Before watching, review the basics of Swift Concurrency and structured concurrency by checking out “Swift concurrency: Behind the scenes” and “Explore structured concurrency in Swift” from WWDC21.

Discover how you can use Swift macros to make your codebase more expressive and easier to read. Code along as we explore how macros can help you avoid writing repetitive code and find out how to use them in your app. We'll share the building blocks of a macro, show you how to test it, and take you through how you can emit compilation errors from macros.

Discover how Swift macros can help you reduce boilerplate in your codebase and adopt complex features more easily. Learn how macros can analyze code, emit rich compiler errors to guide developers towards correct usage, and generate new code that is automatically incorporated back into your project. We'll also take you through important concepts like macro roles, compiler plugins, and syntax trees.

We’ve learned a lot from building other compiler and programming language systems (e.g., Clang/C++, Swift, etc) over the last 20+ years. From that experience, we are building Mojo to:

  • Be a fully compatible superset of Python, benefiting from its easy to read and understandable syntax and enabling its large community of developers to already know how to write Mojo!
  • Support system programming features and hardware accelerators that extend the performance and reach of Python into new domains as we move into a new parallel-computing world.
  • Be fully integrated with the existing Python ecosystem, extending and benefiting from all of the existing packages. We will also build seamless C and C++ interoperability to lift (and benefit from) work in those communities over time.
  • Provide a new high-performance heterogeneous compiler and runtime implementation that benefits from state-of-the-art techniques.

As a consequence, we believe Mojo fits the perfect sweet spot for LLMs to generate and output highly scalable code, because it combines the human readability and usability of Python, but extends it with powerful lower-level systems features that enable it to scale across more hardware and drive the next set of the world’s applications and use cases.

We think LLMs will continue to unlock creativity and productivity across many languages, but we also believe Mojo will be well prepared to lift collaborative software development to the next level and bring programming into new frontiers.

Explore SwiftUI's powerful animation capabilities and find out how these features work together to produce impressive visual effects. Learn how SwiftUI refreshes the rendering of a view, determines what to animate, interpolates values over time, and propagates context for the current transaction.

Define boundaries and act on user location updates.

Simplify location delivery using asynchronous events in Swift.

Create connections between your app’s data model and views.

A SwiftUI app can display data that people can change using the app’s user interface (UI). To manage that data, an app creates a data model, which is a custom type that represents the data. A data model provides separation between the data and the views that interact with the data. This separation promotes modularity, improves testability, and helps make it easier to reason about how the app works.

Keeping the model data (that is, an instance of a data model) in sync with what appears on the screen can be challenging, especially when the data appears in multiple views of the UI at the same time.

SwiftUI helps keep your app’s UI up to date with changes made to the data thanks to Observation. With Observation, a view in SwiftUI can form dependencies on observable data models and update the UI when data changes.

Learn how you can build a mental model for performance in SwiftUI and write faster, more efficient code. We'll share some of the common causes behind performance issues and help you triage hangs and hitches in SwiftUI to create more responsive views in your app.

  • Apple Design Resources — iOS 17 and iPadOS 17

    Apple’s first official design kit for Figma contains a comprehensive set of components, views, system interfaces, text styles, color styles, materials, and layout guides. All the core ingredients you need to quickly create highly realistic iOS and iPadOS app designs.

    Some key features include:

    • Comprehensive set of components, from Alerts to Widgets and everything in between
    • Home Screen and Lock Screen widget templates
    • Notification design templates
    • Templates for tabbed apps, parent / child apps, split views, and sheets
    • Full dynamic type chart with accessibility sizes
    • Built in iOS system colors, materials, text styles and vibrancy effects
  • Faster sorting algorithms discovered using deep reinforcement learning

Fundamental algorithms such as sorting or hashing are used trillions of times on any given day. As demand for computation grows, it has become critical for these algorithms to be as performant as possible. Whereas remarkable progress has been achieved in the past, making further improvements on the efficiency of these routines has proved challenging for both human scientists and computational approaches. Here we show how artificial intelligence can go beyond the current state of the art by discovering hitherto unknown routines. To realize this, we formulated the task of finding a better sorting routine as a single-player game. We then trained a new deep reinforcement learning agent, AlphaDev, to play this game. AlphaDev discovered small sorting algorithms from scratch that outperformed previously known human benchmarks. These algorithms have been integrated into the LLVM standard C++ sort library. This change to this part of the sort library represents the replacement of a component with an algorithm that has been automatically discovered using reinforcement learning. We also present results in extra domains, showcasing the generality of the approach.

New algorithms will transform the foundations of computing

Digital society is driving increasing demand for computation, and energy use. For the last five decades, we relied on improvements in hardware to keep pace. But as microchips approach their physical limits, it’s critical to improve the code that runs on them to make computing more powerful and sustainable. This is especially important for the algorithms that make up the code running trillions of times a day.

In our paper published today in Nature, we introduce AlphaDev, an artificial intelligence (AI) system that uses reinforcement learning to discover enhanced computer science algorithms – surpassing those honed by scientists and engineers over decades.

AlphaDev uncovered a faster algorithm for sorting, a method for ordering data. Billions of people use these algorithms every day without realising it. They underpin everything from ranking online search results and social posts to how data is processed on computers and phones. Generating better algorithms using AI will transform how we program computers and impact all aspects of our increasingly digital society.

By open sourcing our new sorting algorithms in the main C++ library, millions of developers and companies around the world now use it on AI applications across industries from cloud computing and online shopping to supply chain management. This is the first change to this part of the sorting library in over a decade and the first time an algorithm designed through reinforcement learning has been added to this library. We see this as an important stepping stone for using AI to optimise the world’s code, one algorithm at a time.

An interface, consisting of a label and additional content, that you display when the content of your app is unavailable to users.

A type that performs tasks for clients across process boundaries.

Read contactless physical and digital wallet cards using your iPhone.

The ProximityReader framework supports Tap to Pay on iPhone, which allows a person’s iPhone to act as a point-of-sale device without additional hardware. ProximityReader also supports the reading of loyalty cards from the Wallet app. Use this framework to initiate the payment process from your app.

The use of this framework requires you to coordinate with a participating payment service provider that is Level 3 certified. Contact your payment provider and work with them to set up a workflow for handling payments. When you’re ready, contact Apple and request the entitlement you need to integrate Tap to Pay on iPhone support into your app. For information on requesting this entitlement, see Setting up the entitlement for Tap to Pay on iPhone.

Interact with accessories that track subjects on camera as they move around.

Apple’s library technology has a long and glorious history, dating all the way back to the origins of Unix. This does, however, mean that it can be a bit confusing to newcomers. This is my attempt to clarify some terminology.

Highlights of new technologies introduced at WWDC23.

Shorten compile times by reducing the number of symbols your code exports and by giving the compiler the explicit information it needs.

Tell the Xcode build system about your project’s target-related dependencies, and reduce the compiler workload during each build cycle.

A detailed list of individual Xcode build settings that control or change the way a target is built.

Use mergeable dynamic libraries to get app launch times similar to static linking in release builds, without losing dynamically linked build times in debug builds.

For those who don’t follow Swift’s development, ABI stability has been one of its most ambitious projects and possibly its defining feature, and it finally shipped in Swift 5. The result is something I find endlessly fascinating, because I think Swift has pushed the notion of ABI stability farther than any language without much compromise.

  • Generalize APIs with parameter packs

    Swift parameter packs are a powerful tool to expand what is possible in your generic code while also enabling you to simplify common generic patterns. We'll show you how to abstract over types as well as the number of arguments in generic code and simplify common generic patterns to avoid overloads. To get the most out of this session, we recommend first checking out “Embrace Swift generics" from WWDC22.
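A minimal parameter-pack sketch (Swift 5.9; the function name is invented): a single generic function that abstracts over both the types and the number of its arguments, replacing what would otherwise be a family of overloads.

```swift
// `each T` declares a type parameter pack; `repeat each value`
// expands the pack, so one declaration covers any arity.
func makeTuple<each T>(_ value: repeat each T) -> (repeat each T) {
    (repeat each value)
}

// Usage: one function handles one, two, or more arguments.
let pair = makeTuple(1, "two")
let triple = makeTuple(1, "two", 3.0)
```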

  • Meet mergeable libraries

    Discover how mergeable libraries combine the best parts of static and dynamic libraries to help improve your app's productivity and runtime performance. Learn how you can enable faster development while shipping the smallest app. We'll show you how to adopt mergeable libraries in Xcode 15 and share best practices for working with your code.

  • Debug with structured logging

    Discover the debug console in Xcode 15 and learn how you can improve your diagnostic experience through logging. Explore how you can navigate your logs easily and efficiently using advanced filtering and improved visualization. We'll also show you how to use the dwim-print command to evaluate expressions in your code while debugging.

  • Xcode 15 Beta Release Notes

    Xcode 15 beta includes SDKs for iOS 17, iPadOS 17, tvOS 17, watchOS 10, and macOS 14. The Xcode 15 beta release supports on-device debugging in iOS 12 and later, tvOS 12 and later, and watchOS 4 and later. Xcode 15 beta requires a Mac running macOS Ventura 13.3 or later.

  • SwiftData

    Write your model code declaratively to add managed persistence and automatic iCloud sync.

    Combining Core Data’s proven persistence technology and Swift’s modern concurrency features, SwiftData enables you to add persistence to your app quickly, with minimal code and no external dependencies. Using modern language features like macros, SwiftData enables you to write code that is fast, efficient, and safe, enabling you to describe the entire model layer (or object graph) for your app. The framework handles storing the underlying model data, and optionally, syncing that data across multiple devices.

    SwiftData has uses beyond persisting locally created content. For example, an app that fetches data from a remote web service might use SwiftData to implement a lightweight caching mechanism and provide limited offline functionality.

    SwiftData is unintrusive by design and supplements your app’s existing model classes. Attach the Model macro to any model class to make it persistable. Customize the behavior of that model’s properties with the Attribute(_:renamingIdentifier:hashModifier:) and Relationship(_:renamingIdentifier:inverse:hashModifier:) macros. Use the ModelContext class to insert, update, and delete instances of that model, and to write unsaved changes to disk.

    To display models in a SwiftUI view, use the Query property wrapper and specify a predicate or fetch descriptor. SwiftData performs the fetch when the view appears, and tells SwiftUI about any subsequent changes to the fetched models so the view can update accordingly. You can access the model context in any SwiftUI view using the modelContext environment value, and specify a particular model container or context for a view with the modelContainer(_:) and modelContext(_:) view modifiers.

    As your app’s model layer evolves, SwiftData performs automatic migrations of the underlying model data so it remains in a consistent state. If the aggregate changes between two versions of the model layer exceed the capabilities of automatic migrations, use Schema and SchemaMigrationPlan to participate in those migrations and help them complete successfully.

  • Backyard Birds: Building an app with SwiftData and widgets

Create an app with persistent data, interactive widgets, and an all new in-app purchase experience.

Backyard Birds offers a rich environment in which you can watch the birds that visit your backyard garden. You can monitor their water and food supply to ensure they always have fresh water and plenty to eat, or upgrade the game using in-app purchase to provide tastier food for the birds to eat.

The sample implements its data model using SwiftData for persistence, and integrates seamlessly with SwiftUI using the Observable protocol. The game’s widgets implement App Intents for interactive and configurable widgets. The in-app purchase experience uses the ProductView and SubscriptionStoreView from StoreKit. You can access the source code for this sample on GitHub.

Defines and implements conformance of the Observable protocol.

Make responsive apps that update the presentation when underlying data changes.

Observation provides a robust, type-safe, and performant implementation of the observer design pattern in Swift. This pattern allows an observable object to maintain a list of observers and notify them of specific or general state changes. This has the advantages of not directly coupling objects together and allowing implicit distribution of updates across potential multiple observers.

The Observation framework provides the following capabilities:

  • Marking a type as observable
  • Tracking changes within an instance of an observable type
  • Observing and utilizing those changes elsewhere, such as in an app’s user interface

To declare a type as observable, attach the Observable macro to the type declaration. This macro declares and implements conformance to the Observable protocol to the type at compile time.
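A brief sketch of the framework's low-level tracking API (the Player type here is invented for illustration): withObservationTracking runs a closure, records which observable properties it reads, and fires onChange the first time one of them is mutated.

```swift
import Observation

// Illustrative type, not from the documentation.
@Observable
final class Player {
    var score = 0
}

let player = Player()

// The onChange closure fires once, on the first mutation of any
// property that the apply closure read (here, `score`).
withObservationTracking {
    _ = player.score
} onChange: {
    print("score changed")
}

player.score += 1   // triggers the one-shot onChange above
```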

Use macros to generate repetitive code at compile time.

Swift macros help you avoid writing repetitive code in Swift by generating that part of your source code at compile time. Calling a macro is always additive: The macro adds new code alongside the code that you wrote, but never modifies or deletes code that’s already part of your project.

Many libraries provide macros, including the Swift standard library and many frameworks. You can also write your own macros.

Because macros generate Swift code, you use the same tools for development and debugging, regardless of whether your code uses macros.

A structure that creates an unfair lock.

Unfair locks are low-level locks that block efficiently on contention. They’re useful for protecting code that loads stored resources. However, it’s unsafe to use os_unfair_lock from Swift because it’s a value type and, therefore, doesn’t have a stable memory address. That means when you call os_unfair_lock_lock or os_unfair_lock_unlock and pass a lock object using the & operator, the system may lock or unlock the wrong object.

Instead, use OSAllocatedUnfairLock, which avoids that pitfall because it doesn’t function as a value type, despite being a structure. All copied instances of an OSAllocatedUnfairLock control the same underlying lock allocation.
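A short sketch of the safe pattern (iOS 16 / macOS 13 and later): the lock allocation lives out-of-line, and withLock gives scoped, exclusive access to the protected state, so the raw lock/unlock calls the text warns about never appear.

```swift
import os

// The lock owns its protected state; copies of the struct all
// refer to the same underlying lock allocation.
let counter = OSAllocatedUnfairLock(initialState: 0)

// withLock acquires the lock, runs the closure with inout access
// to the state, and releases the lock on exit.
counter.withLock { value in
    value += 1
}

let current = counter.withLock { $0 }
```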

I'm not claiming that anything I say in this post is novel. It definitely shares various aspects of well-known software engineering or management practices. I'm just sharing the way I approach the larger technical work that I do and why I do it this way.

In software, quality isn’t just about whether the product “works” or is “performant”. Quality is about how easy it is to add new features, and how effectively new team members can understand and inherit the code. Do the abstractions you’ve introduced make sense within the domain? Is the complexity you’ve introduced through your abstractions actually justified by the problems it solves? Or have you merged groups of functionality together into massive core classes simply to remove the amount of repeated lines of code, regardless of whether those lines may need to diverge in the future? Despite our primal urges to DRY (don’t-repeat-yourself) up our code, repeated code is not itself a sin. If two pieces of repeated logic always change in tandem, then they should be unified. If two pieces of code change independently but happen right now to have the same logic, then they should not be unified.

The guiding principle of an abstraction should always be “Does this make the code easier to work with and understand?” The introduction of complexity is only ever justified if it solves for even greater complexity.

Here are some posts I’ve been collecting since iOS 16 and macOS 13. Hopefully they will soon be outdated.

Apple released the Network framework in iOS 12, macOS 10.14. It includes a NWPathMonitor that is now the preferred way to monitor changes to network status. The three steps to monitor network changes:

  1. Create a NWPathMonitor.
  2. Call the start method of the path monitor passing a queue that will receive path change events.
  3. Receive path changes in the pathUpdateHandler.
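The three steps above can be sketched as follows (the queue label is arbitrary):

```swift
import Network
import Dispatch

// 1. Create a NWPathMonitor.
let monitor = NWPathMonitor()

// 3. Receive path changes in pathUpdateHandler.
monitor.pathUpdateHandler = { path in
    if path.status == .satisfied {
        print("Network reachable")
    } else {
        print("Network unreachable")
    }
}

// 2. Start the monitor on a queue that receives path change events.
monitor.start(queue: DispatchQueue(label: "net.monitor"))
```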

Let’s take a look at two core techniques that can help us avoid AnyView while still enabling us to work with multiple view types in very dynamic ways.
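One such technique, sketched here with invented view content, is @ViewBuilder: it composes differing branches into a single opaque `some View`, so a function can return multiple concrete view types without erasing to AnyView.

```swift
import SwiftUI

// @ViewBuilder wraps the branches in a ConditionalContent under the
// hood, so the return type stays a single opaque `some View`.
@ViewBuilder
func statusView(isLoading: Bool) -> some View {
    if isLoading {
        ProgressView()
    } else {
        Text("Done")
    }
}
```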

May

Anyone with a good idea can help shape the future features and direction of the language. To reach the best possible solution to a problem, we discuss and iterate on ideas in a public forum. Once a proposal is refined and approved, it becomes a release goal, and is tracked as a feature of an upcoming version of Swift.

To support this process, the Swift Evolution repository collects the goals for the upcoming major and minor releases (as defined by the core team) as well as proposals for changes to Swift. The Swift evolution process document details how ideas are proposed, discussed, reviewed, and eventually accepted into upcoming releases.

Below is a list of all the current and upcoming proposal reviews.

Functional logic languages have a rich literature, but it is tricky to give them a satisfying semantics. In this paper we describe the Verse calculus, VC, a new core calculus for functional logical programming. Our main contribution is to equip VC with a small-step rewrite semantics, so that we can reason about a VC program in the same way as one does with lambda calculus; that is, by applying successive rewrites to it.

Beginning in Swift 5.8 you can flexibly adopt upcoming Swift features using a new compiler flag and compilation condition. This post describes the problem upcoming feature flags solve, their benefits, and how to get started using them in your projects.

Requirements of Value Semantic Types

When we say “type X has value semantics,” we mean:

  • Each variable of type X has an independent notional value.

  • A language-level copy (e.g., let b = a) of a variable of type X has an equivalent value.

  • Given a local variable a of type X, safe code cannot observe the value of a except via an expression that uses a.

  • Given a variable a of type X, safe code cannot alter the value of a except by one of the following means applied to a or to a property of a that reflects all or part of a's value.

    • assignment
    • invocation of a mutating method
    • invocation of a mutating accessor of a property or subscript
    • passing the expression as an inout parameter
  • Concurrent access to the values of distinct variables of type X cannot cause a data race.
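A minimal sketch of the first two requirements, using a hypothetical `Point` struct: a language-level copy has an equivalent value, and each variable's value is independent thereafter.

```swift
struct Point {
    var x: Int
    var y: Int
}

var a = Point(x: 1, y: 2)
let b = a          // language-level copy: b's value is equivalent to a's

a.x = 10           // mutating a (assignment to a property of a)...

// ...cannot be observed through b, whose value is independent.
// b.x is still 1; a.x is now 10.
```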

All told, here’s the general procedure for defunctionalization:

  1. Collect all functions passed as an argument to the filter function.
  2. Create a data type, with one variant for each possible function, each with fields to store the free variables referenced by the corresponding function.
  3. Replace the invocation of the filter condition with an apply function, which determines what filter condition the data structure represents, and executes it.
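The procedure above can be sketched in Swift. The two filter conditions here ("is even" and "is greater than n") are illustrative; the enum's associated value stores the free variable of the corresponding closure:

```swift
// Step 2: one variant per possible function, with fields for free variables.
enum FilterCondition {
    case isEven
    case greaterThan(Int)   // stores the free variable n
}

// Step 3: the apply function interprets the data structure and executes
// the filter condition it represents.
func apply(_ condition: FilterCondition, to value: Int) -> Bool {
    switch condition {
    case .isEven:             return value % 2 == 0
    case .greaterThan(let n): return value > n
    }
}

// A first-order filter that takes data instead of a closure.
func filter(_ values: [Int], where condition: FilterCondition) -> [Int] {
    values.filter { apply(condition, to: $0) }
}

let evens = filter([1, 2, 3, 4], where: .isEven)        // [2, 4]
let big   = filter([1, 2, 3, 4], where: .greaterThan(2)) // [3, 4]
```

Because `FilterCondition` is plain data, it can be serialized or sent across a process boundary, which a closure cannot.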

To be able to send remote push notifications to an iOS simulator in Xcode 14, we have to be running macOS 13 on a computer with an Apple silicon or a T2 processor. In this setup, the simulator generates a unique registration token, which is specific to the combination of the simulator and the Mac hardware it’s running on.

The simulator supports the Apple Push Notification Service (APNS) Sandbox environment, which means that we have to connect to api.sandbox.push.apple.com to send a notification to the simulator.

The new Layout protocol in iOS 16 lets us place views explicitly, and unlike the position() modifier, we can specify an anchor point when we call the place() method in placeSubviews().
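A minimal sketch of the idea (iOS 16+): a custom layout that pins every subview's center to the container's center via the `anchor` parameter. The layout's name and behavior are illustrative.

```swift
import SwiftUI

struct CenterStack: Layout {
    func sizeThatFits(proposal: ProposedViewSize,
                      subviews: Subviews,
                      cache: inout ()) -> CGSize {
        // Occupy whatever space is proposed.
        proposal.replacingUnspecifiedDimensions()
    }

    func placeSubviews(in bounds: CGRect,
                       proposal: ProposedViewSize,
                       subviews: Subviews,
                       cache: inout ()) {
        let center = CGPoint(x: bounds.midX, y: bounds.midY)
        for subview in subviews {
            // anchor: .center pins each subview's center point to `center`,
            // which position() alone cannot express per subview.
            subview.place(at: center, anchor: .center, proposal: .unspecified)
        }
    }
}
```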

You can use the GitHub API to trigger a webhook event called repository_dispatch when you want to trigger a workflow for activity that happens outside of GitHub. For more information, see "Repositories."
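A minimal workflow fragment that listens for such an event; the event type `deploy` is illustrative:

```yaml
on:
  repository_dispatch:
    types: [deploy]

jobs:
  handle-dispatch:
    runs-on: ubuntu-latest
    steps:
      # github.event.action carries the event_type sent in the dispatch.
      - run: echo "Triggered by ${{ github.event.action }}"
```

The event itself is sent with a `POST /repos/{owner}/{repo}/dispatches` request whose JSON body includes `"event_type": "deploy"`, authenticated with a token that has repository access.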

An xcframework is a library distribution bundle

More precisely, an xcframework is a universal, binary, library distribution format. Let’s break that description down in reverse order.

An xcframework is a library distribution format. Each xcframework holds exactly one library. A library is a precompiled collection of code that can be consumed by another project to create an executable (or app).

An xcframework is a binary distribution format. That means it does not include source code in its distribution. Only built binaries and interface specifications (headers and/or Swift interface files) are included.

An xcframework is a universal distribution format. That means it holds libraries and interfaces for different platforms as well as processor architectures in the same structure. A single xcframework can, for example, offer the same library for consumption for iOS, watchOS, and Mac projects using either Intel or ARM architectures.

Finally, an xcframework is a bundle because it’s a directory with a well-defined content structure and file extension, and has an Info.plist file in its root. Examining its Info.plist file shows that it has a CFBundlePackageType of XFWK.

Turns out this is trickier than it seems. UIPanGestureRecognizer has a small startup delay: it requires you to move your finger before it recognizes the gesture as starting. If you just touch your finger on the moving object, that’s not technically a “pan”, so it ignores you (which makes sense). The object keeps moving until you move your finger enough to trigger a “pan”, and the result is an interaction that doesn’t feel very responsive.

Icarus provides first-class language support for Swift, C, C++, and Objective-C.

If you are Swift or C-family language developer, Icarus can provide you with first-class support for building client- and server-side applications and frameworks.

✨ Fun fact: This extension's debugging support was built entirely using Nova and Icarus. "Look ma, no Xcode!"

This is a list of changes to the Swift language that are frequently proposed but that are unlikely to be accepted. If you're interested in pursuing something in this space, please familiarize yourself with the discussions that we have already had. In order to bring one of these topics up, you'll be expected to add new information to the discussion, not just to say, "I really want this" or "this exists in some other language and I liked it there".

Additionally, proposals for out-of-scope changes will not be scheduled for review. The readme file identifies a list of priorities for the next major release of Swift, and the dashboard includes a list of changes that have been rejected after a formal review.

Several of the discussions below refer to "C family" languages. This is intended to mean the extended family of languages that resemble C at a syntactic level, such as C++, C#, Objective-C, Java, and JavaScript. Swift embraces its C heritage. Where it deviates from other languages in the family, it does so because the feature was thought actively harmful (such as the pre/post-increment ++) or to reduce needless clutter (such as ; or parentheses in if statements).

I agree it's not very relevant in this case, but I have measured this 🙂 adding Swift to an otherwise-empty (just sleep()s in main) C process on my Darwin system adds 128kB of total dirty memory, of which 16kB is heap memory coming from 28 additional allocations.

This will vary depending on system libraries, symbol ordering, memory allocator, and other factors of course. My particular system is likely measuring a bit on the high side at the moment for unrelated reasons.

(edited to correct numbers slightly, I forgot I had edited my test program, and reverting the edits reduced it from 176kB to 128kB and from 37 allocations to 28)

  • Stop using floats
    • BINARY DATA WAS NOT SUPPOSED TO HAVE DECIMAL PARTS
    • YEARS OF COMPILER DEVELOPMENT yet NO REAL-WORLD USE FOUND for anything other than char and int
    • Wanted to use decimal numbers anyway for a laugh? We had a tool for that: It was called FIXED-POINT ARITHMETIC
    • 'x==x can be FALSE', 'j is a number', 'the sum of t and 7 is 0.30000000004'-statements dreamt up by the utterly Deranged floats
  • Tips and tricks for exploring a new codebase

When you join a new team, it’s tempting to keep your head down and study your new codebase. In your head, you might think that you’re expected to already know everything about the codebase even though you’re completely new to the project.

You might think that all patterns and practices in the project are industry standard and that you just haven’t worked in places as good as this one before.

All of these kinds of ideas exist in pretty much anybody’s head and they prevent you from properly learning and exploring a new codebase.

In this post, you have learned some tips about why human interaction is extremely important during your exploration phase. You also learned some useful tips for the more technical side of things to help you effectively tackle learning a new codebase.

Good luck on your next adventure into a new codebase!

At Airbnb, we run a comprehensive suite of continuous integration (CI) jobs before each iOS code change is merged. These jobs ensure that the main branch remains stable by executing critical developer workflows like building the iOS application and running tests. We also schedule jobs that perform periodic tasks like reporting metrics and uploading artifacts.

It’s not necessarily a fair comparison: whilst you might expect them to be the same, the DeviceDiscoveryUI framework has a number of restrictions:

  • It only works on tvOS (so you can’t communicate between an Apple Watch and an iPad like Apple Fitness can)
  • It only works on Apple TV 4K (Apple Fitness can work with Apple TV HD)
  • The tvOS app can only connect to one device at a time (i.e. you couldn’t make a game with this that used two iPhones as controllers)
  • The tvOS app can only connect to other versions of your app that share the same bundle identifier (and are thus sold with Universal Purchase)
  • This will not work on either the tvOS or iOS simulators. You must use physical devices.

SwiftFiddle is an online playground for creating, sharing, and embedding Swift fiddles (little Swift programs that run directly in your browser).

Sometimes, we might want to automatically retry an asynchronous operation that failed, for example in order to work around temporary network problems, or to re-establish some form of connection.

But what if we wanted to implement something similar, but using Swift Concurrency instead? While Combine’s Publisher protocol includes the above retry operator as a built-in API, neither of Swift’s new concurrency APIs offer something similar (at least not at the time of writing), so we’ll have to get creative!

extension Task where Failure == Error {
    @discardableResult
    static func retrying(
        priority: TaskPriority? = nil,
        maxRetryCount: Int = 3,
        retryDelay: TimeInterval = 1,
        operation: @Sendable @escaping () async throws -> Success
    ) -> Task {
        Task(priority: priority) {
            for _ in 0..<maxRetryCount {
                do {
                    return try await operation()
                } catch {
                    let oneSecond = TimeInterval(1_000_000_000)
                    let delay = UInt64(oneSecond * retryDelay)
                    try await Task<Never, Never>.sleep(nanoseconds: delay)

                    continue
                }
            }

            try Task<Never, Never>.checkCancellation()
            return try await operation()
        }
    }
}

A result type that accumulates multiple errors.

Learning Resources for Mojo 🔥

I want to show you how we can use Swift’s types to create modules — Datatypes, UseCases, Features — that will be controlled through a vocabulary defined by their Domain Specific Languages (DSLs).

As these vocabularies are finite sets, this kind of coding has proven to enable the coding of even complex domains in a simple fashion and in very little time — think hours where conventional coding needs weeks.

  • Sketch
  • PaintCode
  • DetailsPro
  • Drama
  • Principle
  • Origami Studio
  • Judo
  • Kolibri
  • Flinto
  • OmniGraffle
  • Keynote
  • Tumult Hype
  • Play

Exploring declarative domain paradigm

Using a consistent layout that adapts to various contexts makes your experience more approachable and helps people enjoy their favorite apps and games on all their devices.

Arrange views inside built-in layout containers like stacks and grids.

Use layout containers to arrange the elements of your user interface. Stacks and grids update and adjust the positions of the subviews they contain in response to changes in content or interface dimensions. You can nest layout containers inside other layout containers to any depth to achieve complex layout effects.

To finetune the position, alignment, and other elements of a layout that you build with layout container views, see Layout adjustments. To define custom layout containers, see Custom layout. For design guidance, see Layout in the Human Interface Guidelines.

Layers the views that you specify in front of this view.

An overlay is a view drawn on top of another view. Today, we will talk about two interesting use cases for overlays in SwiftUI. One of them allows us to keep the structural identity of the view, and the other becomes very handy whenever you build custom navigation transitions.
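A sketch of the structural-identity use case, with illustrative names: attaching a conditional badge in an overlay keeps the underlying `Text` in place in the view tree, whereas branching between two entirely different view hierarchies with if/else would change the view's identity and reset its state and animations.

```swift
import SwiftUI

struct BadgedLabel: View {
    let showsBadge: Bool

    var body: some View {
        Text("Inbox")
            // The overlay adds or removes only the badge; the Text's
            // structural identity is unchanged either way.
            .overlay(alignment: .topTrailing) {
                if showsBadge {
                    Circle()
                        .fill(.red)
                        .frame(width: 8, height: 8)
                }
            }
    }
}
```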

  • Context SDK

    Context SDK leverages machine learning to make optimized suggestions when to upsell an in-app purchase, what type of ad and dynamic copy to display, or predict what a user is about to do in your app, and dynamically change the product flows to best fit their current situation.

  • Connecting the world one conversation at a time

Byrdhouse is a multilingual video conferencing application that helps global teams to communicate and collaborate across 100+ languages with AI-powered real-time translation and meeting notes.

Mojo is a programming language that is as easy to use as Python but with the performance of C++ and Rust. Furthermore, Mojo provides the ability to leverage the entire Python library ecosystem.

Mojo achieves this feat by utilizing next-generation compiler technologies with integrated caching, multithreading, and cloud distribution technologies. Furthermore, Mojo’s autotuning and compile-time meta-programming features allow you to write code that is portable to even the most exotic hardware.

More importantly, Mojo allows you to leverage the entire Python ecosystem so you can continue to use tools you are familiar with. Mojo is designed to become a superset of Python over time by preserving Python’s dynamic features while adding new primitives for systems programming. These new system programming primitives will allow Mojo developers to build high-performance libraries that currently require C, C++, Rust, CUDA, and other accelerator systems. By bringing together the best of dynamic languages and systems languages, we hope to provide a unified programming model that works across levels of abstraction, is friendly for novice programmers, and scales across many use cases from accelerators through to application programming and scripting.

This document is an introduction to the Mojo programming language, fit for consumption by Mojo programmers. It assumes knowledge of Python and systems programming concepts but it does not expect the reader to be a compiler nerd. At the moment, Mojo is still a work in progress and the documentation is targeted to developers with systems programming experience. As the language grows and becomes more broadly available, we intend for it to be friendly and accessible to everyone, including beginner programmers. It’s just not there today.

The Performance Dashboard is where you can compare the performance of standard industry models on Modular’s infrastructure.

Eliminate passwords for your users when they sign in to apps and websites.

Passkeys use iCloud Keychain public key credentials, eliminating the need for passwords. Instead, they rely on biometric identification, such as Touch ID and Face ID in iOS, or a specific confirmation in macOS for generating and authenticating accounts.

As the authenticator, your Apple device generates a unique public-private key pair for every account it creates on a service. The authenticator retains the private key and shares its public key with the server, known as the relying party.

Location-tracking devices help users find personal items like their keys, purse, luggage, and more through crowdsourced finding networks. However, they can also be misused for unwanted tracking of individuals.

Today Apple and Google jointly submitted a proposed industry specification to help combat the misuse of Bluetooth location-tracking devices for unwanted tracking. The first-of-its-kind specification will allow Bluetooth location-tracking devices to be compatible with unauthorized tracking detection and alerts across iOS and Android platforms. Samsung, Tile, Chipolo, eufy Security, and Pebblebee have expressed support for the draft specification, which offers best practices and instructions for manufacturers, should they choose to build these capabilities into their products.

Produce rich API reference documentation and interactive tutorials for your app, framework, or package.

Teach developers your Swift and Objective-C APIs through reference documentation you create from comments in Swift source code, Objective-C header files, and documentation extension files.

Teach developers your Swift and Objective-C APIs through step-by-step, interactive content.

The core premise of “Platform as a Product” is to make explicit the need for a platform (low variability) to exist as a separate system from the customer-facing products (valuable variation), and requires a long-lived platform team, practices, and budget to support it. Just to dive into this briefly: a platform is designed similarly to the manufacturing production line below — we want a platform to provide consistency and reliability (low variability). Indeed, it’s the consistency and reliability provided by a platform that enables a customer-facing product team to deliver products demonstrating high variation — which means we can rapidly deliver and test new features and changes.

April

  • UIViewController.ViewLoading

    A property wrapper that loads the view controller’s view before accessing the property.

    Use this property wrapper on view controller properties that can be nil before the view controller’s view loads. Wrapping view controller properties this way eliminates crashes that can occur from implicitly defining properties as Optional, and then referencing them before the view controller finishes loading.

  • SCRAPSCRIPT

Scrapscript is best understood through a few perspectives:

  • “it’s JSON with types and functions and hashed references”
  • “it’s tiny Haskell with extreme syntactic consistency”
  • “it’s a language with a weird IPFS thing”

Scrapscript solves the software sharability problem.

Modern software breaks at boundaries. APIs diverge, packages crumble, configs ossify, serialization corrupts, git tangles, dependencies break, documentation dies, vulnerabilities surface, etc.

To make software safe and sharable, scrapscript combines existing wisdom in new ways:

  • all expressions are content-addressible “scraps”
  • all programs are data
  • all programs are “platformed”

  1. Draw some software architecture diagrams
  2. Identify the risks individually
  3. Converge the risks on the diagrams
  4. Review and summarise the risks

Hackett is a statically typed, pure, lazy, functional programming language in the Racket language ecosystem. Despite significant differences from #lang racket, Hackett shares its S-expression syntax and powerful, hygienic macro system. Unlike Typed Racket, Hackett is not gradually typed—it is designed with typed programs in mind, and it does not have any dynamically-typed counterpart.

With Xcode breakpoints you can set up your login credentials during development so you don't have to type them manually every time you run your app.

You use types that conform to the SetAlgebra protocol when you need efficient membership tests or mathematical set operations such as intersection, union, and subtraction. In the standard library, you can use the Set type with elements of any hashable type, or you can easily create bit masks with SetAlgebra conformance using the OptionSet protocol. See those types for more information.
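A small sketch of the OptionSet route mentioned above; the `ShippingOptions` type and its cases are illustrative. Conforming to OptionSet brings SetAlgebra along for free, so union, intersection, and membership tests all work on the underlying bit mask.

```swift
struct ShippingOptions: OptionSet {
    let rawValue: Int

    static let nextDay  = ShippingOptions(rawValue: 1 << 0)
    static let priority = ShippingOptions(rawValue: 1 << 1)
    static let standard = ShippingOptions(rawValue: 1 << 2)
}

let fast: ShippingOptions = [.nextDay, .priority]
let all: ShippingOptions  = [.nextDay, .priority, .standard]

let overlap  = fast.intersection(all)   // the shared options: [.nextDay, .priority]
let hasRush  = fast.contains(.nextDay)  // true: a cheap bit-mask membership test
```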

Rather than generating code for the types, we propose instead to compute a compile time type layout, then create a runtime function that interprets the type layout, computing alignments and addresses at runtime to copy or free them.

There's no packaging step, no building containers, or anything like that. You just call a function in the Unison Cloud API to deploy the service. Unison automatically uploads your function and all of its dependencies to Unison Cloud, caching them on the server.

Review dependencies and make them available to Xcode Cloud before you configure your project to use Xcode Cloud.

Develop stunning shared UIs for Android, iOS, desktop, and web.

In the generic parameter list of a generic type, the each keyword declares a generic parameter pack, just like it does in the generic parameter list of a generic function. The types of stored properties can contain pack expansion types, as in let seq and var iter above.

Making responsive apps often requires the ability to update the presentation when underlying data changes. The observer pattern allows a subject to maintain a list of observers and notify them of specific or general state changes. This has the advantages of not directly coupling objects together and allowing implicit distribution of updates across potential multiple observers. An observable object needs no specific information about its observers.

This design pattern is a well-traveled path by many languages, and Swift has an opportunity to provide a robust, type-safe, and performant implementation. This proposal defines what an observable reference is, what an observer needs to conform to, and the connection between a type and its observers.

Display large numbers of repeated views efficiently with scroll views, stack views, and lazy stacks.

Your apps often need to display more data within a container view than there is space for on a device’s screen. Horizontal and vertical stacks are a good solution for repeating views or groups of views, but they don’t have a built-in mechanism for scrolling. You can add scrolling by wrapping stacks inside a ScrollView, and switch to lazy stacks as performance issues arise.
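The pattern described above can be sketched as follows; the row count and content are illustrative. LazyVStack creates each row only as it scrolls into view, which matters once the row count grows large.

```swift
import SwiftUI

struct LongList: View {
    var body: some View {
        ScrollView {
            // A lazy stack inside a scroll view: rows are built on demand
            // instead of all at once, unlike a plain VStack.
            LazyVStack(alignment: .leading) {
                ForEach(0..<10_000, id: \.self) { index in
                    Text("Row \(index)")
                }
            }
        }
    }
}
```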

Manage projects across product and support

Real-time tasks, chat, and powerful customization to get everyone on the same page.

Programming is done in a stateful environment, by interacting with a system through a graphical user interface. The stateful, interactive and graphical environment is more important than the programming language(s) used through it. Yet, most research focuses on comparing and studying programming languages and only little has been said about programming systems.

Technical dimensions is a framework that captures the characteristics of programming systems. It makes it possible to compare programming systems, better understand them, and to find interesting new points in the design space of programming systems. We created technical dimensions to help designers of programming systems to evaluate, compare and guide their work and, ultimately, stand on the shoulders of giants.

Understanding real modules is worthwhile even if you never intend to work in a language that has them. Modules are fundamental, and so much of what is done in other languages, from build configurations to dependency injection to the Adapter pattern, is in large part an attempted encoding of things easily expressed with modules.

So, here’s the overall tally of fields and their usefulness in terms of lessons for software design:

  • Type theory: Very useful
  • Program Analysis: Not useful, other than the definition of “abstraction.”
  • Program Synthesis: Old-school derivational synthesis is useful; modern approaches less so.
  • Formal Verification: Mechanized verification is very useful; automated not so much.
  • Category theory: Not useful, except a small subset which requires no category theory to explain.

So, we have a win for type theory, a win for the part of verification that intersects type theory (by dealing with types so fancy that they become theorems), and a wash for everything else. So, go type theory, I guess.

In conclusion, you can improve your software engineering skills a lot by studying theoretical topics. But most of the benefit comes from the disciplines that study how programs are constructed, not those that focus on how to build tools.

Release apps without drowning in process

Codify your app's release cycle, distribute builds with increased confidence, and give visibility to the entire organization

  • Interesting Swift Snippet to create JSON
let neverJSON = Data(#"{"no":"never"}"#.utf8)

A collection implementing a double-ended queue.

Traditionally, side-effects in functional programming are handled using monads. However, David Spivak and I are attempting a different approach.

In this post, we are going to discuss Turing machines via a particular construction in Poly, and then show the general framework we have developed. Not all of the puzzle pieces are there yet to make this a fully-fledged theory of imperative programming, but we have to walk before we can run.

The Declaration Order Matters

This document describes how to set up a development loop for people interested in contributing to Swift.

swift-inspect is a debugging tool which allows you to inspect a live Swift process to gain insight into the runtime interactions of the application.

swift-inspect uses the reflection APIs to introspect the live process. It relies on the swift remote mirror library to remotely reconstruct data types.

A DistributedActorSystem designed for local only testing.

+10x engineers may be mythical, but -10x engineers exist.

To become a -10x engineer, simply waste 400 engineering hours per week.

A macro that creates a (naive) Free Monad type based on a user-supplied Functor. It uses the traits from the "higher" crate. This macro is a port of the Control.Monad.Free part of the "free" Haskell package by Edward Kmett.

March

A free and chart-filled mini-book on where we are in the Streaming Wars, have been, and will go. From the national and international best-selling author, and former streaming executive, Matthew Ball.

  • Reliably testing code that adopts Swift Concurrency?

    Just wanted to share an update, as we've continued to write more and more tests for code that uses Swift concurrency. Introducing Swift concurrency to a feature continues to be the biggest cause of test flakiness in our experience, but we have managed to reduce flakiness dramatically by cribbing some helpers from swift-async-algorithms, in particular, a "C async support" module, which exposes a global hook override for task enqueuing, which we use to redirect everything to the main, serial executor:

import _CAsyncSupport

@_spi(Internals) public func _withMainSerialExecutor<T>(
  @_implicitSelfCapture operation: () async throws -> T
) async rethrows -> T {
  let hook = swift_task_enqueueGlobal_hook
  defer { swift_task_enqueueGlobal_hook = hook }
  swift_task_enqueueGlobal_hook = { job, original in
    MainActor.shared.enqueue(unsafeBitCast(job, to: UnownedJob.self))
  }
  return try await operation()
}
Whenever we have a flaky continuous integration failure, we wrap the test with this helper and don't typically have a concurrency-based problem with that test again. As a bonus, the test runs more quickly.

The solution is far from perfect, but has saved us from a ton of pain, and we think it basically makes async code behave more like Combine code (i.e. well-defined "subscription" order), and hence becomes a lot more reliable to test.

I believe it will probably only work for code that does not use custom executors, but that should be the case for most folks right now. We also haven't tried to include this code in library/application code yet, but if Swift doesn't provide a public solution to the problem, we'll likely look into extracting this helper into its own package, which should make it easier to drop into a project.

SwiftUI has a few different modifiers that can change the color of text, such as foregroundColor(_:), foregroundStyle(_:), and tint(_:). They provide different functionality, but sometimes overlap and it can be hard to know for sure which modifier to choose. In this post we will go over some common use cases for all three of the modifiers and see which one suits best for what purpose.

And what it can teach us about SwiftUI’s stack layout algorithm

I have one more thing to say on the relative sizing view modifier from my previous post, Working with percentages in SwiftUI layout. I’m assuming you’ve read that article. The following is good to know if you want to use the modifier in your own code, but I hope you’ll also learn some general tidbits about SwiftUI’s layout algorithm for HStacks and VStacks.

As mobile developers, we face unique challenges when it comes to releasing and managing updates for our apps across different app stores. One of the primary reasons for this difficulty is the scattered and insufficient documentation available, which lacks the necessary level of detail and nuance to provide a clear understanding of the process.

Additionally, the interfaces and tools provided by these stores for managing releases are often opaque and don't offer much insight into how things work behind the scenes, which further complicates the process.

This reference is a compilation of answers for common and rare situations in an attempt to increase transparency. It is compiled from experience, developer forums, Stack Overflow, and various other sources of developer documentation. We hope contributions from other developers will grow this resource further.

By adding the new compiler flag -enable-upcoming-feature with the feature flags we would like to enable to the “Swift Compiler - Custom Flags” section in Xcode, the compiler will enable the selected features for us. For example, if we wanted to enable existential any and strict concurrency checking, we could provide the compiler with these flags: -enable-upcoming-feature ExistentialAny -enable-upcoming-feature StrictConcurrency. StrictConcurrency here is equivalent to -warn-concurrency, as it exists in Swift 5.7 and earlier.
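The same flags can be expressed per-target in a SwiftPM manifest (tools version 5.8 or later). The package and target names here are illustrative:

```swift
// swift-tools-version: 5.8
import PackageDescription

let package = Package(
    name: "MyLibrary",
    targets: [
        .target(
            name: "MyLibrary",
            swiftSettings: [
                // Each upcoming feature is enabled individually.
                .enableUpcomingFeature("ExistentialAny"),
                .enableUpcomingFeature("StrictConcurrency"),
            ]
        )
    ]
)
```

Source code can then check for a feature at compile time with `#if hasFeature(ExistentialAny)`.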

SwiftUI’s layout primitives generally don’t provide relative sizing options, e.g. “make this view 50% of the width of its container”. Let’s build our own!
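One rough way to approximate this, as a sketch only (the linked post develops a more robust Layout-based modifier; the names here are hypothetical, and GeometryReader has its own sizing caveats):

```swift
import SwiftUI

// Reads the proposed width via GeometryReader and forwards a fraction
// of it to the content.
struct RelativeWidth: ViewModifier {
    let fraction: CGFloat

    func body(content: Content) -> some View {
        GeometryReader { proxy in
            content
                .frame(width: proxy.size.width * fraction)
                .frame(maxWidth: .infinity, alignment: .leading)
        }
    }
}

extension View {
    /// E.g. `someView.relativeWidth(0.5)` for half the container's width.
    func relativeWidth(_ fraction: CGFloat) -> some View {
        modifier(RelativeWidth(fraction: fraction))
    }
}
```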

See the design goals of the Verse programming language and its features. Use this section as a reference.

Verse is a programming language developed by Epic Games that you can use to create your own gameplay in Unreal Editor for Fortnite, including customizing your devices for Fortnite Creative.

A short course to introduce Verse to people with no programming experience whatsoever. No. Programming. Experience. Whatsoever. Seriously.

Screenshot recovery utility

Improve navigation behavior in your app by replacing navigation views with navigation stacks and navigation split views.

If your app has a minimum deployment target of iOS 16, iPadOS 16, macOS 13, tvOS 16, or watchOS 9, or later, transition away from using NavigationView. In its place, use NavigationStack and NavigationSplitView instances. How you use these depends on whether you perform navigation in one column or across multiple columns. With these newer containers, you get better control over view presentation, container configuration, and programmatic navigation.

A proxy for the current testing context.

XCTContext provides a way for activities (XCTActivity) to run against the current testing context, either directly in a test case or in custom testing utilities. You can break up long test methods in UI tests or integration tests into activities to reuse, and to simplify results in the Xcode test reports. Use runActivity(named:block:) to run a block of code as a named substep in a test. For more information, see Grouping Tests into Substeps with Activities.

Simplify test reports by creating activities that organize substeps within complex test methods.

There are many good reasons to be critical of code reviews, pull requests, and other processes that seem to slow things down. The lack of trust in co-workers is, however, not one of them.

You can easily be swayed by that argument because it touches something deep in our psyche. We want to be trusted, and we want to trust our colleagues. We want to belong.

The argument is visceral, but it misrepresents the motivation for process. We don't review code because we believe that all co-workers are really North Korean agents looking to sneak in security breaches if we look away.

We look at each other's work because it's human to make mistakes. If we can't all be in the same office at the same time, fast but asynchronous reviews also work.

Create fully immutable types for your domain using Swift

This resource is useful primarily to developers, but may also interest curious technophiles who want to take a peek “behind the curtain” to see how much of the magic just beneath our fingertips is made.

SwiftUI has the mask(alignment:_:) modifier that masks a view using the alpha channel of another view. The reverse operation is not part of SwiftUI, but it can easily be built using a blend-mode trick.

extension View {
    @inlinable
    public func reverseMask<Mask: View>(alignment: Alignment = .center, @ViewBuilder _ mask: () -> Mask) -> some View {
        self.mask(
            Rectangle()
                .overlay(alignment: alignment) {
                    mask()
                        .blendMode(.destinationOut)
                }
        )
    }
}
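As a quick illustration, here is how the extension above might be used to punch a circular hole in a view (the view, sizes, and alignment are arbitrary choices for the example):

```swift
import SwiftUI

struct HoleExample: View {
    var body: some View {
        // A blue card with a transparent circular cut-out in its top-right corner.
        RoundedRectangle(cornerRadius: 16)
            .fill(.blue)
            .frame(width: 300, height: 180)
            .reverseMask(alignment: .topTrailing) {
                Circle()
                    .frame(width: 60, height: 60)
                    .padding(8)
            }
    }
}
```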

The software value chain is knowledge-based since it is highly dependent on people. Consequently, a lack of practice in managing knowledge as a resource may jeopardise its application in software development. Knowledge-Based Resources (KBRs) relate to employees’ intangible knowledge that is deemed to be valuable to a company’s competitive advantage. In this study, we apply a grounded theory approach to examine the role of KBRs in Agile Software Development (ASD). To this aim, we collected data from 18 practitioners from five companies. We develop the Knowledge-Push theory, which explains how KBRs boost the need for change in ASD. Our results show that the practitioners who participated in the study utilise, as primary strategies, task planning, resource management, and social collaboration. These strategies are implemented through the team environment and settings and incorporate an ability to codify and transmit knowledge. However, this process of codification is non-systematic, which consequently introduces inefficiency in the domain of knowledge resource utilisation, resulting in potential knowledge waste. This inefficiency can generate negative implications for software development, including meaningless searches in databases, frustration because of recurrent problems, the unnecessary redesign of solutions, and a lack of awareness of knowledge sources.

A request that detects barcodes in an image.

“Functional Software Architecture” refers to methods of construction and structure of large and long-lived software projects that are implemented in functional languages and released to real users, typically in industry.

The goals for the workshop are:

  • To assemble a community interested in software architecture techniques and technologies specific to functional programming;
  • To identify, categorize, and document topics relevant to the field of functional software architecture;
  • To connect the functional programming community to the software architecture community to cross-pollinate between the two.

I’ve taught functional programming for years now, each time experimenting with different ways of teaching core concepts. Over time, I’ve collected and converged on simple (but reasonably precise) pedagogical definitions for a range of functional concepts.

Tries are prefix trees, where the key is usually a String.

A trie is a special case of a tree where characters are stored at each node, and a path down the tree represents a word.
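A minimal Swift sketch of that idea (the class and method names here are my own, not from the linked material):

```swift
// Each node stores its children keyed by character; a flag marks word ends.
final class TrieNode {
    var children: [Character: TrieNode] = [:]
    var isWord = false
}

final class Trie {
    private let root = TrieNode()

    func insert(_ word: String) {
        var node = root
        for ch in word {
            if node.children[ch] == nil { node.children[ch] = TrieNode() }
            node = node.children[ch]!
        }
        node.isWord = true
    }

    // A path down the tree spells a prefix; the flag tells us if it's a full word.
    func contains(_ word: String) -> Bool {
        var node = root
        for ch in word {
            guard let next = node.children[ch] else { return false }
            node = next
        }
        return node.isWord
    }
}
```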

Software architecture is always a topic for hot debate, especially when there are so many different choices. For the last 8-12 months, I have been experimenting with the MV pattern to build client/server apps and wrote about it in my original article SwiftUI Architecture - A Complete Guide to MV Pattern Approach. In this article, I will discuss how the MV pattern can be applied to build large-scale client/server applications.

Release your apps that aren’t suited for public distribution as unlisted on the App Store, discoverable only with a direct link. Unlisted apps don’t appear in any App Store categories, recommendations, charts, search results, or other listings. In addition, they can be accessed through Apple Business Manager and Apple School Manager. Apps for partner sales tools, employee resources, or research studies are examples of good candidates for unlisted distribution.

Distribute your app to:

  • Limited audiences (such as part-time employees, franchisees, partners, business affiliates, higher-education students, or conference attendees) through a standard link that’s usable on the App Store and Apple School Manager or Apple Business Manager.
  • Employee-owned devices that aren’t eligible to be managed through Apple School Manager or Apple Business Manager.
  • Managed and unmanaged devices.
  • All regions that are supported by the App Store.

Next year will be the 10-year anniversary of Swift. Moore's Law means we get to have nicer things in the future, like Apple Silicon and the Internet of Things. Yet the more powerful and ubiquitous computing devices become, the more damage can result if our software malfunctions.

Therefore we must continue to improve Swift along its primary goals of simplicity and safety. So let's bring over some new features from the cutting edge of computer science.

  • Petey - AI Assistant

    This is the app previously known as watchGPT. Quickly get answers to your questions or generate longer messages without typing.

    We are excited to introduce Petey, your AI assistant app for the Apple Watch! With this app, you can now interact with the famous GPT model right from your wrist.

  • CodableWithConfiguration

A type that can convert itself into and out of an external representation with the help of a configuration that handles encoding contained types.

CodableWithConfiguration is a type alias for the EncodableWithConfiguration and DecodableWithConfiguration protocols. Use this protocol to support codability in a type that can’t conform to Codable by itself, but can do so with additional statically-defined configuration provided by a CodableConfiguration instance.

AttributedString uses this approach to allow an instance to contain arbitrary attributes, including attributes defined by frameworks outside of Foundation or the platform SDK. It does this by including one or more AttributeScope instances, a type that conforms to EncodingConfigurationProviding and DecodingConfigurationProviding. An attribute scope like AttributeScopes.SwiftUIAttributes defines attribute keys, and conforms to AttributeScope to provide configuration instances that know the AttributedStringKey types and their associated Value types. With this type information, an AttributedString can encode all of its attributes, even from frameworks other than Foundation.

The target audience is Swift compiler developers: if you've previously encountered GenericSignatures, GenericEnvironments, SubstitutionMaps and ProtocolConformances and found them slightly mysterious, this is definitely for you. You might also want to take a look if you're interested in programming language design or type systems in general.

A monad is a lax 2-functor from the terminal 2-category 1 to Cat.

declare const wow: ("hello" | "world") | (string & {}) // wow is of type "hello" or "world" or any other string.

This weird-looking intersection of string & {} makes it so that the specific strings "hello" and "world" are distinguished from string as a whole type.
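To make the trick concrete, here is a small sketch (the Color type and paint function are my own names, not from the original snippet):

```typescript
// "red" and "green" stay as distinct literal types for autocomplete,
// while (string & {}) quietly widens the union to accept any string.
type Color = "red" | "green" | (string & {});

function paint(c: Color): string {
  return `painting ${c}`;
}

console.log(paint("red"));     // literal suggested by the IDE
console.log(paint("#ff8800")); // arbitrary string still compiles
```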

The SwiftUI navigation bar has no easy way to change its color. Whether on iOS 13 or iOS 16, it feels clunky and should be easier. The good news is that we have some workarounds that work pretty well.

This project contains a version of Remote.pure.run that is capable of programmatically producing mermaid diagrams to visualize the forked and awaited tasks in a Remote computation.

The hyphen, em dash and en dash are everywhere, but most of us don’t know when or why to use them – and different writers use the dashes in different ways. Let’s figure this out!

A guide to getting you on the IndieWeb

  • Become a citizen of the IndieWeb
  • Publishing on the IndieWeb
  • Federating IndieWeb Conversations

This deserves a longer post.

A language is sound if it doesn't accept programs that it shouldn't. "Shouldn't" is doing a lot of work there, though: it's an inherently open concept, reliant on an external semantic model. In programming languages, we generally say a program shouldn't be accepted if it "won't work", but our definition for "work" is that individual operations will behave like they're expected to, not that the program as a whole will do what the programmer meant for it to do. The latter definition, of course, can't be applied without pulling the user's intent into the semantic model, and any sort of bug in the program (or flaw in the user's understanding) would become an "unsoundness" in the language. (For example, if I wrote a Fibonacci-style function that started with 3,4,..., the language would be unsound if I meant it to be the standard Fibonacci sequence.) In the more standard PL approach, the language designer just has to rigorously define how individual operations are supposed to work.

One way to do that is to give well-defined semantics to all possible combinations of operands, even ones that don't a priori make much sense. This is common in dynamically-typed languages; for example, JavaScript's subscript operator is well-defined even if you use it on an integer. In statically-typed languages, we usually rule out a lot of those cases by defining static preconditions on operations. For example, in Swift, the subscript syntax requires the base operand to statically have a subscript member, and then the dynamic semantics impose a precondition that the base value actually has the type it was type-checked to have. In that world, the language is formally sound as long as it's correctly enforcing that those static preconditions on individual operations can't be violated.

Sometimes we do use "soundness" in a slightly less formal sense: we argue about how users expect basic operations to work. In this informal sense, a language can be unsound if it violates those user expectations even if dynamically everything remains well-defined. For example, Java has covariant object arrays: you can implicitly convert a String[] to Object[], but assignments into the result will fail with an exception if they aren't dynamically Strings. This is not formally unsound in Java because this aspect of the well-typedness of the array assignment syntax is not a static precondition; it's a well-defined dynamic check. Nonetheless, it is very arguably unsound behavior under a higher-level understanding of how assigning into an array element ought to work, because a bunch of things that the language accepts implicitly and without complaint can be combined to dynamically fail with a type error.

But I have a hard time accepting that that could ever apply to dynamic cast. Dynamic casts are an explicit syntax whose entire purpose is to dynamically check whether a value has a particular type. The possibility that that can fail when values don't have that type is inherent to the operation.

This page collects all the familiar navigation patterns for structuring iOS apps, like drill-downs, modals, pyramids, sequences, and more! Think of it as an unofficial bonus chapter for Apple’s Human Interface Guidelines, written by someone who cares deeply about well-crafted user interfaces.

I’m excited to announce two new open source Swift packages: swift-certificates and swift-asn1. Together, these libraries provide developers a faster and safer implementation of X.509 certificates, a critical technology that powers the security of TLS.

The AI product manager that writes your Jira tickets

JiraPT-3 is your team's newest AI-powered Product Manager that uses GPT-3 to write user stories and epics. This is the Chrome extension for Product Managers who want to 10X their output 🤖 or for Developers who want to automate PMs away 🤭

  When prompted with the user story “As a ___, I want ___, so that I can ___”, JiraPT-3 will create:

  • A fully-formed description containing context on the user and their goal
  • A clearly-defined set of Acceptance Criteria that outlines the workflow to achieve the user’s goal.

Extend your codebase with custom, interactive blocks.

Build rich documentation, enhance your workflows, and bring your repository to life.

Publish your Swift package privately, or share it globally with other developers.

Making your Swift packages available online enables you to use the support for Swift package dependencies in Xcode. By publishing your Swift packages to private Git repositories, you can manage and integrate internal dependencies across your projects, allowing you to reduce duplicate code and promote maintainability. Publish your packages publicly, and share your code with developers around the world. To get started, you just need a Swift package and an account with a provider of hosted Git repositories.

Edit images and video with async / await in Swift, powered by Metal.

The main value type in AsyncGraphics is a Graphic. It’s like an image, but with various methods for applying effects and some static methods for creating visuals.

AsyncGraphics also has another value type called Graphic3D. It’s a 3d image, a volume of voxels.

AsyncGraphics on GitHub

The words you choose within your app are an essential part of its user experience.

Whether you're building an onboarding experience, writing an alert, or describing an image for accessibility, designing through the lens of language will help people get the most from your app or game.

The purity of Haskell allows for mathematical reasoning about programs. This not only makes it possible to be more confident in the correctness of our programs but can be used in order to optimize code as well. In fact, the primary Haskell compiler, GHC, uses these guarantees in order to optimize programs. The restrictions imposed by purity turn into properties that programmers and compilers alike may use to their advantage.

While we could attempt to solve that clarity problem with more verbose API naming, the core issue would still remain — that the modifier-based version doesn’t properly show what the resulting view hierarchy will be in this case. So, in situations like the one above, when we’re wrapping multiple siblings within a parent container, opting for a view-based solution will often give us a much clearer end result.

On the flip side, if all that we’re doing is applying a set of styles to a single view, then implementing that as either a “modifier-like” extension, or using a proper ViewModifier type, will most often be the way to go. And for everything in between — such as our earlier “featured label” example — it all really comes down to code style and personal preference as to which solution will be the best fit for each given project.
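For the single-view styling case, here is a sketch of the ViewModifier approach (the “featured” styling itself is invented for illustration, not taken from the article):

```swift
import SwiftUI

struct FeaturedLabelModifier: ViewModifier {
    func body(content: Content) -> some View {
        content
            .font(.headline)
            .padding(8)
            .background(.yellow.opacity(0.2), in: Capsule())
    }
}

extension View {
    // "Modifier-like" extension that applies the style to any single view.
    func featured() -> some View {
        modifier(FeaturedLabelModifier())
    }
}

// Usage: Text("New!").featured()
```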

Control the information you receive from your tests at different stages in the software engineering process by creating and configuring test plans.

Judo is a design and build tool for SwiftUI apps that writes production-ready code for you while you’re designing. Eliminate back-and-forth with developers and free them from unrewarding grunt work.

  1. On your Mac, backup your Safari data — bookmarks, etc. Just in case.
  2. Completely quit Safari on all devices.
  3. Disable Safari syncing in iCloud settings on all devices. Choose the option to delete the data from the device on iOS, but keep the data on your Mac.
  4. Launch Safari on all devices. Bookmarks, etc. should be gone on iOS.
  5. Completely quit Safari on all devices, again.
  6. Reboot all devices.
  7. Re-enable Safari syncing in iCloud settings on all devices.
  8. Launch Safari on your Mac, so it can sync the initial data.
  9. Launch Safari on all iOS devices.

The gist is that LazyVStack is just lazy, which is not even close to UICollectionView / UITableView, or List (which is still backed by a UIKit UICollectionView). All of those reuse (recycle) views as you need them. A LazyVStack just grows in size, forever. So the longer it is and the more you scroll, the slower it gets!

Instead of having separate routes for each API endpoint, we could just have a single API endpoint that takes in a huge enum and switches on that. We can then share that enum with the client and they’d automatically be in sync.

In my opinion git submodule is never the right answer. Often, git submodule is the worst answer and any of the following would be better.

Use git subtree

git subtree solves many of the same problems as git submodule, but it does not violate the git data model.

Use a state object as the single source of truth for a reference type that you store in a view hierarchy. Create a state object in an App, Scene, or View by applying the @StateObject attribute to a property declaration and providing an initial value that conforms to the ObservableObject protocol. Declare state objects as private to prevent setting them from a memberwise initializer, which can conflict with the storage management that SwiftUI provides.

SwiftUI creates a new instance of the model object only once during the lifetime of the container that declares the state object. For example, SwiftUI doesn’t create a new instance if a view’s inputs change, but does create a new instance if the identity of a view changes. When published properties of the observable object change, SwiftUI updates any view that depends on those properties, like the Text view in the above example.

Note: If you need to store a value type, like a structure, string, or integer, use the State property wrapper instead.
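The “above example” the quote refers to isn’t reproduced here; a minimal sketch of the pattern looks like this (the model and view names are illustrative):

```swift
import SwiftUI

final class DataModel: ObservableObject {
    @Published var name = "Some Name"
}

struct MySampleView: View {
    // Created once per container identity, not once per body evaluation.
    @StateObject private var model = DataModel()

    var body: some View {
        Text(model.name) // re-rendered whenever a published property changes
    }
}
```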

And, most importantly, the whole team can collaborate in on the design. In the string diagram above the colored segment was added in a quick architecture design sync during a discussion centered around reporting requirements. The flexibility of the drawing gives us the power to make these changes live, but it’s the mathematical formalism that gives the whole team common ground to interpret exactly what those changes mean.

Learn where Unison is headed

Check out what we’re excited about and working on at the moment 😎

The roadmap is of course not an exhaustive list of our efforts and is very much subject to change.

February

Update your app’s architecture build settings to support building macOS, iOS, watchOS, and tvOS apps on Apple silicon.

tl;dr Foundation overloads the pattern matching operator ~= to enable matching against error codes in catch clauses.
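A sketch of what that enables (the specific error depends on the platform and the failing operation, so treat this as illustrative):

```swift
import Foundation

do {
    _ = try Data(contentsOf: URL(fileURLWithPath: "/no/such/file"))
} catch CocoaError.fileReadNoSuchFile {
    // Matched via Foundation's ~= overload against the thrown NSError's code.
    print("file is missing")
} catch {
    print("some other error: \(error)")
}
```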

Fundamentally, opening an existential means looking into the existential box to find the dynamic type stored within the box, then giving a "name" to that dynamic type. That dynamic type name needs to be captured in a generic parameter somewhere, so it can be reasoned about statically, and the value with that type can be passed along to the generic function being called. The result of such a call might also refer to that dynamic type name, in which case it has to be erased back to an existential type. After the call, any values described in terms of that opened existential type have to be type-erased back to an existential so that the opened type name doesn't escape into the user-visible type system. This both matches the existing language feature (opening an existential value when accessing one of its members) and also prevents this feature from constituting a major extension to the type system itself.

This section describes the details of opening an existential and then type-erasing back to an existential. The details of this change should be invisible to the user, and manifest only as the ability to use existentials with generics in places where the code would currently be rejected. However, there are a lot of details, because moving from dynamically-typed existential boxes to statically-typed generic values must be carefully done to maintain type identity and the expected evaluation semantics.

Provides support for “if” statements with #available() clauses in multi-statement closures, producing conditional content for the “then” branch, i.e. the conditionally-available branch.

Keep control of your news reading with Reeder, RSS reader and read later client in one app, now with support for iCloud syncing.

As an application of the developed system, I present a classical example of using dependent types: vectors parameterized by their length. Since vector lengths now dwell on type-level, we can guarantee statically that, for example, the replicate operation returns a vector of a given length, concat returns a vector whose length is equal to the sum of lengths of two passed vectors, and zip takes two vectors of the same length and returns a vector of that length. The code relies on some rudimentary facilities of Church numerals and pairs, which also serve as a warm-up.

I go back and forth on whether it makes sense for us to seriously write up the implementation of Swift generics. Is it "just engineering"?

The runtime generics implementation might be novel. SPJ published a paper saying it can't be done

The key thing they missed was our "reabstraction" concept. M:N relationship between types and concrete representations is pretty novel

  • git checkout --orphan latest_branch
  • git add -A
  • git commit -am "Initial commit message"
  • git branch -D main
  • git branch -m main
  • git push -f origin main
  • git gc --aggressive --prune=all # remove the old files

In the past, JavaScript errors inside components used to corrupt React’s internal state and cause it to emit cryptic errors on next renders. These errors were always caused by an earlier error in the application code, but React did not provide a way to handle them gracefully in components, and could not recover from them.

This PR introduces a more streamlined way to perform an in-place mutation of a case in an enum when in a testing context. You should think of XCTModify as a companion to XCTUnwrap, as it allows you to safely unwrap a case from an enum and then apply a further mutation. This helper will be very useful for the navigation tools being built for TCA.

try XCTModify(&result, case: /Result.success) {
  $0 += 1
}

Luckily, Xcode has a solution to this — User Breakpoints! After creating any breakpoint, you can right-click and select: “Move Breakpoint To” > “User” to move it from your project or workspace to user space. After this, you’ll see a shared list of User Breakpoints in every Xcode project you open.

SwiftUI doesn’t natively support shake gestures yet, but it’s easy to implement them ourselves.

We need to create an extension on UIDevice that defines a notification to post when a shake happens. We also need to create an extension on UIWindow that overrides the motionEnded method.

extension UIWindow {
  open override func motionEnded(_ motion: UIEvent.EventSubtype, with event: UIEvent?) {
    super.motionEnded(motion, with: event)
    // Only react to shakes, not other motion events.
    guard motion == .motionShake else { return }

    NotificationCenter.default.post(name: UIDevice.deviceDidShake, object: nil)
  }
}
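The UIDevice extension mentioned above just declares the notification name, and a SwiftUI view can then listen for it. A sketch (the onShake modifier name is my own):

```swift
import SwiftUI
import UIKit
import Combine

extension UIDevice {
    static let deviceDidShake = Notification.Name("deviceDidShake")
}

extension View {
    // Runs `action` whenever the shake notification fires.
    func onShake(perform action: @escaping () -> Void) -> some View {
        onReceive(NotificationCenter.default.publisher(for: UIDevice.deviceDidShake)) { _ in
            action()
        }
    }
}
```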

Base is a secure, low-cost, developer-friendly Ethereum L2 built to bring the next billion users to web3.

class ViewModel: ObservableObject {
  @Published var count = 0

  enum Progress {
    case didSubscribeToScreenshots, didRespondToScreenshot
  }

  // set to non-nil when testing
  var progress: AsyncChannel<Progress>?

  @MainActor
  func task() async {
    let screenshots = NotificationCenter.default.notifications(
      named: UIApplication.userDidTakeScreenshotNotification
    )
    await progress?.send(.didSubscribeToScreenshots)
    for await _ in screenshots {
      self.count += 1
      await progress?.send(.didRespondToScreenshot)
    }
  }
}

@MainActor
func testBasics() async throws {
  let vm = ViewModel()

  // Install a progress channel into the ViewModel we can monitor
  let vmProgress = AsyncChannel<ViewModel.Progress>()
  vm.progress = vmProgress

  // We get `task.cancel(); await task.value` for free with async let
  async let _ = vm.task()

  XCTAssertEqual(vm.count, 0)

  // Give the task an opportunity to start executing its work.
  let firstProgress = await vmProgress.next()
  XCTAssertEqual(firstProgress, .didSubscribeToScreenshots)

  // Simulate a screen shot being taken.
  NotificationCenter.default.post(
    name: UIApplication.userDidTakeScreenshotNotification, object: nil
  )

  // Give the task an opportunity to update the view model.
  let nextProgress = await vmProgress.next()
  XCTAssertEqual(nextProgress, .didRespondToScreenshot)

  XCTAssertEqual(vm.count, 1)
}

  1. Decompose the problem into its constituent mental pieces
  2. Solve the problem there
  3. Mentally compile the solution into software

This HTML course for web developers provides a solid overview, from novice to expert level.

A Simulator runtime is an embedded OS package that Simulator loads when running your app on a simulated device in Xcode. For example, when you test your app on a simulated iPhone running iOS 16, Simulator loads the iOS 16 Simulator runtime on the simulated device.

To minimize the download size of Xcode, version 14 and later don’t include the Simulator runtimes for watchOS and tvOS. You need the current versions of the Simulator runtimes to build projects and to run the Simulator for those platforms. You can download and install these files when you first launch Xcode, or later from the Xcode run destination, from Xcode Preferences, or from the command line.

Manage the amount of storage that Xcode requires by choosing Xcode > Preferences > Platforms to view the currently installed Simulator runtimes, and removing any that you don’t need.

Support peer-to-peer connectivity and the discovery of nearby devices.

The Multipeer Connectivity framework supports the discovery of services provided by nearby devices and supports communicating with those services through message-based data, streaming data, and resources (such as files). In iOS, the framework uses infrastructure Wi-Fi networks, peer-to-peer Wi-Fi, and Bluetooth personal area networks for the underlying transport. In macOS and tvOS, it uses infrastructure Wi-Fi, peer-to-peer Wi-Fi, and Ethernet.

  • Struct and enum accessors take a large amount of stack space

    We recently found an issue where the compiler was failing to reuse stack space between switch cases, and allocating the stack space necessary for all of the enum payloads and cases' local state even though only one actually executes at a time. You might be running into the same problem.

    Until we fix that issue, one workaround we've found for this issue is to wrap up each case block in an immediately-invoked closure, like:

    switch foo {
    case .bar:
        _ = {
            ...
        }()
    case .bas:
        _ = {
            ...
        }()
    }

> If you see stack size issues even after adopting indirect cases, you might try that to see if it helps.
* [ToolbarTitleMenu](https://developer.apple.com/documentation/swiftui/toolbartitlemenu)
   > The title menu of a toolbar.
* [The Art of Sequential Animations in SwiftUI: Tips, Tricks, and Examples](https://holyswift.app/how-to-do-sequential-animations-in-swiftui)
> Sequential Animations in SwiftUI offer a powerful and intuitive way to create dynamic and engaging user interfaces. By leveraging the power of SwiftUI’s animation system, developers can easily create complex and beautiful animations that add polish and delight to their apps.
> 
> With a few lines of code, animations can be sequenced and coordinated to create more intricate and expressive user experiences, making SwiftUI an excellent choice for building modern, interactive apps.
* [Simulator: Save as GIF](https://xcode.tips/simulator-save-as-gif)
   > ![](https://xcode.tips/assets/50_simulator_save_as_gif.jpg)
   > After recording a video with the simulator, hold down the Control key while clicking the small preview. The simulator opens a menu. Select “Save as Animated GIF”.
* [In Xcode 14.3 you can now see the output from your previews in the console!](https://mobile.twitter.com/SwiftyAlex/status/1626989662353891328)
   > In Xcode 14.3 you can now see the output from your previews in the console!
   >
   > Just select the new previews button and you’ll see every print 🤯
   >
   > Combine this with \_printChanges() and you can debug views without running your app 🕵🏼
* [Dynamic Library Usage Guidelines](https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/DynamicLibraries/100-Articles/DynamicLibraryUsageGuidelines.html)
   > The dynamic loader compatibility functions provide a portable and efficient way to load code at runtime. However, using the functions incorrectly can degrade app performance. This article shows how to correctly load and use dynamic libraries in your apps.
   >
   > Dynamic libraries help to distribute an app’s functionality into distinct modules that can be loaded as they are needed. Dynamic libraries can be loaded either when the app launches or as it runs. Libraries that are loaded at launch time are called _dependent libraries_. Libraries that are loaded at runtime are called _dynamically loaded libraries_. You specify which dynamic libraries your app depends on by linking your app with them. However, it’s more efficient to use dynamic libraries as dynamically loaded libraries instead of dependent libraries. That is, you should open libraries when you’re about to use symbols they export and close them when you’re done. In some cases, the system unloads dynamically loaded libraries when it determines that they aren’t being used.
   >
   > This article uses the word _image_ to refer to an app file or a dynamic library. App binaries contain the app’s code and the code from the static libraries the app uses. The dynamic libraries the app loads at launch time or runtime are separate images.
* [EditKit Pro](https://apps.apple.com/app/id1659984546)
> Elevate your iOS Development game with EditKit Pro — the ultimate Xcode Editor Extension packed with convenient utilities for a more efficient and productive workflow.
* [The Swift Programming Language](https://docs.swift.org/swift-book/documentation/the-swift-programming-language)
> Understand the high-level goals of the language.
* [WebURL Key-Value Pairs](https://gist.github.com/karwa/bb8eb387dac10fd7c0c1fffc020c1c7c#proposed-solution)
> In order to make it easier to read/write key-value pairs from URL components, WebURL 0.5.0 will include a new KeyValuePairs type. The current formParams view will be deprecated and removed in the next minor release.
* [What Is Copy On Write(COW) In Swift?](https://ishtiz.com/swift/what-is-copy-on-writecow-in-swift)
> Copy-On-Write (COW) is a memory management technique used in Swift programming language to optimize the performance of memory allocation and deallocation operations. In COW, whenever a new instance of a data structure is created, the original data structure is not modified, instead, a new copy of the data structure is created in memory and modifications are made to the new copy. Copy-on-write is a highly used strategy in Swift for optimising memory usage. The main idea of COW is that when multiple callers want to access the resources which are same, you can put them pointing to the same resource. The state of the resource will be maintained until a caller tries to modify its “copy” of the resource. The main advantage is that if a caller never makes any modifications, no true copy need ever be created. Don’t confuse copy on right with [reference](https://ishtiz.com/swift/value-and-reference-types-in-swift-a-deep-dive) type.
* [The Change of Mobile Teams Topology for an Organization](https://medium.com/mobile-app-development-publication/the-change-of-mobile-teams-topology-for-an-organization-d6fb1f6ff75b)
> Optimize the structure of mobile teams to fit the need of the organization in scaling app development.
* [iOS and iPadOS usage](https://developer.apple.com/support/app-store)
> As measured by devices that transacted on the App Store.
* [Composable Styles in SwiftUI](https://movingparts.io/composable-styles-in-swiftui)
> A look at how to compose styles and how to make custom views support composable styles.
* [pointfreeco/swift-clocks](https://github.com/pointfreeco/swift-clocks)
> ⏰ A few clocks that make working with Swift concurrency more testable and more versatile.
* [Creating an XCFramework](https://rhonabwy.com/2023/02/10/creating-an-xcframework)
> The key pieces to know when tackling this are embedded in the core of the article: [Creating a multiplatform binary framework bundle](https://developer.apple.com/documentation/xcode/creating-a-multi-platform-binary-framework-bundle):
>
> 1. For a single library, use the `xcodebuild -create-xcframework` command with the `-library` option. There’s also a `-framework` option, but reserve that for when you need to expose multiple static, or dynamic, libraries as a binary deliverable.
> 1. Avoid using dynamic libraries when you want to support iOS and the iOS simulator, as only macOS supports dynamic linking for these using a framework structure. Instead, use static libraries.
> 1. Use the `lipo` command to merge libraries when you’re building for x86 and arm architectures, but otherwise **_DO NOT_** combine the static libraries for the different platforms. Instead, have a separate binary for each platform you’re targeting.
> 1. These days, the iOS simulator libraries need to support BOTH the x86_64 and arm64 architectures, so yep — that’s where you use `lipo` to merge those two into a single “fat” static library — at least if you’re targeting the iOS simulator on macOS. The same goes for supporting the macOS platform itself.
> 1. Get to know the codes called “triples” that represent the platforms you’re targeting. In the world of Rust development, three Apple platforms are “supported” without having to resort to nightly toolchains: iOS, the iOS simulator, and macOS. Triples are plain strings (yep — no type system here to double-check your work). A triple ostensibly encodes “CPU”, “vendor”, and “platform” — but like any fairly dynamic thing, it’s been extended a bit to support “platform variants”.
>
> The triple codes you’ll likely want to care about, and their platforms:
> * x86_64-apple-ios — the original iOS Simulator on an Intel Mac
> * aarch64-apple-ios-sim — the iOS simulator on an M1/arm based Mac.
> * aarch64-apple-ios — iOS and iPadOS (both are only arm architectures)
> * aarch64-apple-darwin — M1/arm based Macs
> * x86_64-apple-darwin — Intel based Macs
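The steps above can be sketched as a pair of commands. The library name and paths here are hypothetical placeholders, following the Rust target layout the article mentions:

```shell
# 1. Merge the two simulator slices (x86_64 + arm64) into one
#    "fat" static library for the iOS simulator:
lipo -create \
  target/x86_64-apple-ios/release/libmylib.a \
  target/aarch64-apple-ios-sim/release/libmylib.a \
  -output target/ios-simulator/libmylib.a

# 2. Bundle the device and simulator libraries into an XCFramework,
#    one -library entry per platform (do NOT lipo across platforms):
xcodebuild -create-xcframework \
  -library target/aarch64-apple-ios/release/libmylib.a -headers include/ \
  -library target/ios-simulator/libmylib.a -headers include/ \
  -output MyLib.xcframework
```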
* [A Beginner’s Guide to Styling Components in SwiftUI](https://holyswift.app/a-beginners-guide-to-styling-components-in-swiftui)
> In conclusion, the SwiftUI ButtonStyle protocol is a versatile and straightforward tool for customizing the look and feel of your buttons in your iOS app.
>
> Whether you’re a seasoned developer or just starting out, this protocol can help you create buttons that are both functional and visually appealing. So if you’re looking to add a personal touch to your buttons, be sure to check out SwiftUI’s ButtonStyle protocol!
* [Genius by Diagram](https://www.genius.design)
> It understands what you’re designing and makes suggestions that autocomplete your design using components from your design system.
* [**Developer Conferences Agenda**](https://developers.events)
* [Managing a merge queue](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/managing-a-merge-queue)
> You can increase development velocity with a merge queue for pull requests in your repository.
>
> A merge queue can increase the rate at which pull requests are merged into a busy target branch while ensuring that all required branch protection checks pass.
>
> Once a pull request has passed all of the required branch protection checks, a user with write access to the repository can add that pull request to a merge queue.
>
> A merge queue may use GitHub Actions. For more information, see "[GitHub Actions](https://docs.github.com/en/actions)."
>
> The merge queue creates temporary branches with a special prefix to validate pull request changes. The changes in the pull request are then grouped into a `merge_group` with the latest version of the `base_branch` as well as changes ahead of it in the queue. GitHub will merge all these changes into `base_branch` once the checks required by the branch protections of `base_branch` pass.
* [Adding a stretchable header to a SwiftUI ScrollView](https://danielsaidi.com/blog/2023/02/06/adding-a-stretchable-header-to-a-swiftui-scroll-view)
> The `ScrollViewHeader` presented in this post lets you add stretchable headers to your scroll views by just adding your content to this header component.
>
> I have added this view to my newly released [ScrollKit](https://github.com/danielsaidi/ScrollKit) library. You can find the source code [here](https://github.com/danielsaidi/ScrollKit/blob/main/Sources/ScrollKit/ScrollViewHeader.swift). If you decide to give it a try, I’d be very interested in hearing what you think.
* [Styling Components in SwiftUI](https://movingparts.io/styling-components-in-swiftui)
> SwiftUI has a great API for styling views independent of a view’s implementation. In this post, we’ll look at how we can style custom views in the same way.
* [Changing orientation for a single screen in SwiftUI](https://www.polpiella.dev/changing-orientation-for-a-single-screen-in-swiftui)
> Before you go, I want to stress that while this is the only workaround that we were able to find, it is by no means a robust and future-proof solution. We have found that navigation behaviour in SwiftUI tends to change in every iOS version and changing a single screen from portrait to landscape orientation works well on iOS 16 but not on iOS 15, where you'll probably want to set the orientation to allow `.allButUpsideDown` rather than constraining it to `.landscape` only.
>
> For this reason, I would take what has been discussed in this article **with a big pinch of salt** and make sure you have **sufficient UI/manual tests around the screen you're locking orientation for**.
* [If your iPhone won't turn on or is frozen](https://support.apple.com/en-us/HT201412)
> If your iPhone has a frozen screen, doesn't respond when you touch it, or becomes stuck when you turn it on, learn what to do.
>
> On your iPhone 8 or later, including iPhone SE (2nd and 3rd generation)
> 1. Press and quickly release the volume up button.
> 1. Press and quickly release the volume down button.
> 1. Press and hold the side button until you see the Apple logo.
* [[Pitch] Type Wrappers](https://forums.swift.org/t/pitch-type-wrappers/60019/45)
> I've been working on the implementation for [attached macros](https://forums.swift.org/t/pitch-attached-macros/62812), and I had the chance to implement [@Observable](https://github.com/DougGregor/swift-macro-examples/pull/6) from the Future Directions section of the [observation pitch](https://forums.swift.org/t/pitch-observation/62051). I was able to replicate the functionality in this type wrapper pitch through composition of the various attached macro capabilities.
>
> The macro approach provides more flexibility, because the macro author can decide what code to generate inside the wrapped type. The macro-expanded code then becomes much more transparent to the programmer than a subscript call, which isn't very informative in terms of what that call accomplishes. The macro still has the ability to add backing storage variables, initializers, nested types derived from stored properties of the wrapped type, and more, and the transformation can be customized depending on what kind of type the macro is attached to (e.g. a `struct` versus an `actor`).
>
> After taking the motivating use cases surfaced in this pitch thread and implementing them as macros, I'm confident that macros can fully subsume type wrappers while providing more flexibility to library authors.
* [SwiftUI TextEditor Keyboard Avoidance](https://github.com/Asperi-Demo/4SwiftUI/blob/master/PlayOn_iOS/PlayOn_iOS/Findings/TestTextEditorToFocus.swift)

```swift
import SwiftUI

struct ContentView: View {
    @State private var text: String = ""

    init() {
        // Make the TextEditor background transparent (pre-iOS 16 technique).
        UITextView.appearance().backgroundColor = .clear
    }

    @FocusState var inFocus: Int?

    var body: some View {
        ScrollViewReader { sp in
            ScrollView {
                TextEditor(text: $text).id(0)
                    .focused($inFocus, equals: 0)
                    .frame(height: 300)
                    .background(.yellow)

                TextEditor(text: $text).id(1)
                    .focused($inFocus, equals: 1)
                    .frame(height: 300)
                    .background(.mint)

                TextEditor(text: $text).id(2)
                    .focused($inFocus, equals: 2)
                    .frame(height: 300)
                    .background(.teal)

                // Spacer so the last editor can scroll above the keyboard.
                if inFocus == 2 {
                    Color.clear.frame(height: 300)
                }
            }
            .onChange(of: inFocus) { id in
                withAnimation {
                    sp.scrollTo(id)
                }
            }
        }
    }
}
```
* A path of pain with URLCache eviction and subclassing
> If you need to control the eviction strategy, or to implement your own storage, you'll have to do custom caching completely outside Foundation URL loading.
* How to cancel a background task in Swift
> The async/await syntax, introduced in Swift 5.5, offers a readable way to write asynchronous code. Asynchronous programming can improve an app's performance, but it is important to cancel unneeded tasks so that unwanted background work does not interfere with the app. This article demonstrates how to cancel a task explicitly and shows how child tasks are cancelled automatically.
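A minimal sketch of explicit cancellation (not the article's code): `Task.checkCancellation()` throws `CancellationError` once the task is cancelled, so long-running work can bail out cooperatively, and `Task.sleep` throws on cancellation as well.

```swift
import Foundation

// Counts slowly; cooperates with cancellation on every iteration.
func countSlowly() async throws -> Int {
    var count = 0
    for _ in 0..<1_000 {
        try Task.checkCancellation()
        try await Task.sleep(nanoseconds: 10_000_000) // 10 ms
        count += 1
    }
    return count
}

let task = Task { try await countSlowly() }
task.cancel() // Explicit cancellation; child tasks would inherit it.

do {
    _ = try await task.value
} catch is CancellationError {
    print("cancelled")
}
```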

* Keyboard Avoidance for SwiftUI Views
> Whenever the iOS keyboard appears, it overlaps parts of your interface. The common approach to keyboard management is to move the focused part of the view up to avoid the overlap. In this article, let's learn how we can solve this problem by making our SwiftUI views keyboard-aware.
* The Swift Programming Language 5.7: Quick Reference Guide
* Lenses, Transducers, and Algebraic Effects
* Understanding transducers
> First, a bit of theory. A transducer is a function that describes a process of transformation without knowing how exactly the thing it transforms is organized. Transducers are not the same as generic functions, because they are generic in a slightly different way.

## January

* Styling Components in SwiftUI
> SwiftUI has a great API for styling views independent of a view's implementation. In this post, we'll look at how we can style custom views in the same way.
* Per Martin-Löf: Transcriptions
> This page collects transcriptions of lectures, mainly of a philosophical nature, given by Per Martin-Löf between the autumn of 1993 and September 2019. Most of them are published here for the first time. Each transcription contains a prefatory note outlining its origin.
* Ice Cubes: for Mastodon
> Ice Cubes is a fast, reliable and beautiful Mastodon client.
* ClimateTechList.com
> Joining a breakout company furthers your career growth.
>
> Joining a climate tech company lets you build stuff that actually matters.
>
> Now you can do both.
* 30,000 lines of SwiftUI in production later: We love it but you know there was going to be a "but"
> In general, if you got yourself in a pickle — like I did several times — don't do what I did every time: just as I naively set up unnecessary/non-optimal publishers to begin with, I foolishly thought the opposite — removing any that seem unnecessary — was the way to go. Before you remove any, study the property's trail. I had a few cases where a removal had no immediately visible consequences, only for it to become apparent some time later when the view didn't respond to changes in a specific scenario. Finally, in case I forget again, remember that an @EnvironmentObject will trigger a view update even if the view has no reference to any of its properties.
* Container Pattern in SwiftUI
> The main idea behind the container pattern revolves around two different kinds of views, namely container and presenter/presentation views. The container view is responsible for fetching the data, sorting, filtering and other operations, and then passing it down to the presentation view for display.
>
> In other words, the container is a smart view and the presenter is a dumb view. The only job of the presenter view is to display data on the screen. Data is always passed down from the container view to the presenter view.
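The split described above can be sketched in a few lines; the types and the stand-in data below are illustrative, not from the article:

```swift
import SwiftUI

// The "dumb" presenter only renders whatever it is given.
struct TodoListView: View {
    let todos: [String]
    var body: some View {
        List(todos, id: \.self) { Text($0) }
    }
}

// The "smart" container owns fetching, sorting, and filtering,
// then passes plain data down to the presenter.
struct TodoListContainer: View {
    @State private var todos: [String] = []

    var body: some View {
        TodoListView(todos: todos.sorted())
            .task {
                // Stand-in for a real network or database fetch.
                todos = ["Buy milk", "Write tests", "Ship app"]
            }
    }
}
```

Because the presenter takes plain values, it can be previewed and tested with fixture data without any of the container's dependencies.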

* Is your SwiftUI ScrollView scrolling to the wrong location in iOS 15?
> The workaround? Rejig your ForEach to return only "one view" that matches the frame you want to use with scrollTo(id:,anchor:) — in my case I map my model data to an array of new enum types that describe what the ForEach should output for each iteration.
* Using complex gestures in a SwiftUI ScrollView
> Using complex gestures in a SwiftUI ScrollView is complicated, since they block scroll view gestures in a way that causes scrolling to stop working. I've looked into this, and found a way to use a button style to handle gestures in a way that doesn't block the scrolling.
* Pragmatic Testing and Avoiding Common Pitfalls
> The main purpose of writing tests is to make sure that the software works as expected. Tests also give you confidence that a change you make in one module is not going to break stuff in the same or other modules.
>
> Not all applications require writing tests. If you are building a basic application with a straightforward domain, then you can cover the complete app with manual testing. Having said that, in most professional environments you are working with a complicated domain with business rules. These business rules form the basis on which the company operates and generates revenue.
>
> In this article, I will discuss different techniques for writing tests and how a developer can write good tests to get the most return on their investment.
* PointFree LiveStream
* How to setup CloudKit subscription to get notified for changes
> CloudKit subscriptions offer the best way to keep data up-to-date for your users. I will show you the simplest setup to get started in this CloudKit subscriptions tutorial.
* Cracking the iOS-Developer Coding Challenge, SwiftUI Edition
> In a recent post, I presented an approach for succeeding on take-home iOS-developer coding challenges. (For brevity, I henceforth refer to these particular coding challenges as "coding challenges".) The model solution in that post used UIKit because, at the time I wrote the post, I had already completed coding challenges using that framework. But SwiftUI may be a good, or indeed the best, option.

* Using JavaScript in a Swift app
> Calling JavaScript from Swift code is easily possible, although this isn't friction-free. The interoperability is nowhere close to as good as between Swift and Objective-C. It's also obvious that the JavaScriptCore API was designed for Objective-C and hasn't been properly refined for Swift. That said, in the end, I'd rather have a more robust solution to a problem regardless of the programming language used to implement that solution, even if this means a little more friction.
* Poly: a category of remarkable abundance
> David Spivak: "Poly: a category of remarkable abundance"
* SwiftUI under the Hood: Variadic Views
> Matching SwiftUI's view APIs in their ergonomics is hard to get right. In this post we'll learn how to write view APIs that feel truly native to the platform.
* Roc for Elm programmers
> Roc is a direct descendant of the Elm programming language. The two languages are similar, but not the same!
>
> This is a guide to help Elm programmers learn what's different between Elm and Roc.
* Variadic Views
> To deal with these lists of views (e.g. during layout) we can use the underscored variadic view API. I learned about variadic views through the Moving Parts blog. I don't know whether this API is going to change in the future, whether it's App-Store-proof, and so on. It's probably underscored for a good reason.
* Gaining access to Command-line from XCTest
> We need to create an HTTP server that will be listening for requests. Then we can send a request from XCTest to the server and run whatever we want. The server can even return the output back to XCTest if required.
* Save money when using GitHub Actions for iOS CI/CD
> For private repositories, each GitHub account receives a certain amount of free minutes and storage for use with GitHub-hosted runners, depending on the product used with the account. Any usage beyond the included amounts is controlled by spending limits.
>
> macOS-based runner images are expensive for GitHub, and hence GitHub applies a minute multiplier.
* Disconnect your app from unit testing
> It is similar for SwiftUI. But the new framework did away with the AppDelegate and has a simplified main func. You simply call MyApp.main() in main.swift to start an app.
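One common sketch of this main.swift trick (the `TestApp` stand-in and the class-lookup check are illustrative assumptions, not the article's code): drop the `@main` attribute and decide at launch whether to boot the real app or a minimal one for test runs.

```swift
import Foundation
import SwiftUI

// Note: no @main attribute; main.swift drives the launch instead.
struct MyApp: App {
    var body: some Scene {
        WindowGroup { Text("Hello") }
    }
}

// A minimal stand-in app so unit tests never spin up real app state.
struct TestApp: App {
    var body: some Scene {
        WindowGroup { Text("Testing") }
    }
}

// If XCTest is loaded into the process, we are running unit tests.
if NSClassFromString("XCTestCase") == nil {
    MyApp.main()   // normal launch
} else {
    TestApp.main() // test launch: no networking, no real view tree
}
```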

* Lifetime of State Properties in SwiftUI
> Surprisingly Subtle
>
> One of the challenging parts of SwiftUI is really understanding the way it manages view state (for example, through @State and @StateObject). In theory, it's pretty simple: anytime you want associated view state you just create a property with @State and you're done.
* SwiftUI Views are Lists
> The View Protocol Has A Misleading Name
>
> When you write SwiftUI, all your views conform to the View protocol. The name of this protocol is a bit misleading: it could be called Views or ViewList, or something else that suggests plurals.
* Environment Values as an Alternative to Dependency Injection in SwiftUI
> Using Environment Values to avoid unnecessary body re-evaluations and make our views more self-contained.
* The Nested Observables Problem in SwiftUI
> Today we explore 3 solutions for this interesting problem in SwiftUI.
>
> The first was binding the nested property to a View @State annotation, which would definitely trigger the view redraw, but is not a good solution, leaving the View entangled with the nested structure of the view data. The bright side of this approach is that it has zero effect on the data layers, so if you don't want to touch other layers' code, this is one idea.
>
> The second was manually calling objectWillChange.send(). This is also cumbersome because you need to remember to add the objectWillChange call every time you want to update the view. This is a recipe for bugs.
>
> And lastly, we checked what is for me the best answer to this problem: if you can, remove the nested observed object and make two simple ObservedObjects.
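The third fix, flattening the nesting, can be sketched as follows; the model and view names are illustrative. Instead of one parent `ObservableObject` holding another (whose `@Published` changes SwiftUI would not observe through the parent), the view takes two flat objects:

```swift
import SwiftUI
import Combine

// Two flat ObservableObjects instead of one nested inside another.
final class ProfileModel: ObservableObject {
    @Published var name = "Ada"
}

final class SettingsModel: ObservableObject {
    @Published var darkMode = false
}

struct AccountView: View {
    @ObservedObject var profile: ProfileModel
    @ObservedObject var settings: SettingsModel

    var body: some View {
        // Changes to either object now redraw the view directly.
        Toggle(profile.name, isOn: $settings.darkMode)
    }
}
```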

* Accessing a Swift property wrapper's enclosing instance
> However, if we take a look at the Swift Evolution proposal for the property wrappers feature, we can see that it also mentions a second, alternative way of handling a wrapper's value...
* func typeName(_ type: Any.Type, qualified: Bool = true) -> String
> Returns the demangled qualified name of a metatype.
* func typeByName(_ name: String) -> Any.Type?
> Lookup a class given a name. Until the demangled encoding of type names is stabilized, this is limited to top-level class names (Foo.bar).
* How To Speed Up Swift By Ordering Conformances
> You can generate an order file that has this result by parsing the linkmap file. All protocol conformances end in Mc, so you just need the Swift symbol names matching this pattern that are in the __TEXT/__const section. You could write a detailed parser for the structure of a linkmap, but a simple grep should also do the trick:
>
> ```
> cat Binary-arm64-LinkMap.txt | grep -Ev '<<dead>>|non-lazy-pointer-to-local' | grep -o '_$.*Mc$' > order_file.txt
> ```
>
> That's it! You now have your order file. You can set the Xcode "Order File" build setting to the path of this file, or check out our docs with instructions on third-party build systems. For something this easy to make, it is definitely worth doing to speed up the app for iOS 15 users or the first launch after an app update on iOS 16.
* Swift Algorithm Club
> Here you'll find implementations of popular algorithms and data structures in everyone's favorite new language Swift, with detailed explanations of how they work.
>
> If you're a computer science student who needs to learn this stuff for exams — or if you're a self-taught programmer who wants to brush up on the theory behind your craft — you've come to the right place!
>
> The goal of this project is to explain how algorithms work. The focus is on clarity and readability of the code, not on making a reusable library that you can drop into your own projects. That said, most of the code should be ready for production use but you may need to tweak it to fit into your own codebase.
>
> Code is compatible with Xcode 10 and Swift 4.2. We'll keep this updated with the latest version of Swift. If you're interested in a GitHub Pages version of the repo, check out this.
* Data Laced with History: Causal Trees & Operational CRDTs
> But even more remarkable is the discovery of Causal Trees and operation-based CRDTs. With this deconstruction of the CRDT formula, there's finally a consistent way to understand, design, and implement arbitrary replicated data types. By breaking up conventional data structures into immutable micro-operations that are defined in absolute terms, giving them authorship and causality metadata, and carefully ordering them inside simple containers, you get the resilience and clarity of a convergent event log together with the efficiency of a low-level data structure. Conflict resolution can be precisely tailored to fit the needs of the data model. Operations can just as easily be sent around as-is or condensed into state snapshots. Version vectors can be used to perform garbage collection, view past revisions, or generate delta patches. Every last edit can be sourced to its author, placed in its historical and spatial context, and linked to from the outside. And all this is possible while simplifying the app's architecture, not complicating it, since the paradigm is almost entirely functional!

* Apple Silicon
> Get the resources you need to create software for Macs with Apple silicon.
>
> Build apps, libraries, frameworks, plug-ins, and other executable code that run natively on Apple silicon. When you build executables on top of Apple frameworks and technologies, the only significant step you might need to take is to recompile your code for the arm64 architecture. If you rely on hardware-specific details or make assumptions about low-level features, modify your code as needed to support Apple silicon.
>
> Getting the best performance on Apple silicon sometimes requires making adjustments to the way you use hardware resources. Minimize your dependence on the hardware by using higher-level technologies whenever possible. For example, use Grand Central Dispatch instead of creating and managing threads yourself. Test your changes on Apple silicon to verify that your code behaves optimally.
* Compiling for iOS on Apple M1
> This article provides a quick overview of the compilation process and the architectures available in Xcode, with one goal in mind: get a better understanding of what it means to compile for the M1.
* Death of a Craftsman
> And all would be pretty much well if that were all there was to it. A simple, unproblematic story. But then there is the third category: the ivory tower zealots! These are terrible! They have passion but they use it all wrong! They have principles but they are wrong! They could be into category theory instead of SOLID! Unrealistic! Unpragmatic! Proofs! Maths! Maybe they write comments or even specifications!
* Partial block result builder fails to pick correct overload and generates compiler error
> I understand what you mean; this kind of workflow is not what is going to be supported by the result builder transform implementation going forward. The result builder transform semantics are such that each element in the body is type-checked independently from the others, and the resulting value is then passed to a final buildBlock or a series of buildPartialBlock calls and returned, just like I outlined in my example. The old implementation failed to enforce the "solved independently" bit, which caused all sorts of diagnostics and performance issues.
>
> In your example there are two overloads of the parser(of:) method; both have an argument that accepts a default value, which means that the type-checker won't be able to disambiguate between them without buildExpression or buildPartialBlock providing more context (via generic requirements), just like if you wrote _ = Int.parser() without using result builders.
* Caching network data
> Your quake client fetches a list of earthquakes from the network. Now, you'll extend the client to fetch location details for each earthquake. Each earthquake requires one additional fetch to retrieve the location information. You'll make multiple network connections concurrently while maintaining a cache of replies.
* Reverse Engineering SwiftUI's NavigationPath Codability
> It's incredible to see what Swift 5.7's existential types unlock. They allow us to create an interface that for all intents and purposes is dynamic, being an array of Any values, while simultaneously being able to pull static type information from it when needed. This allows for building tools that are both flexible and safe, such as NavigationStack, which helps decouple domains in a navigation stack while simultaneously retaining type information to pass to destination views.

* NavPath.swift
> Reverse engineering SwiftUI's NavigationPath
* Transferable
> A protocol that describes how a type interacts with transport APIs such as drag and drop or copy and paste.
* Bringing Photos picker to your SwiftUI app
> Select media assets by using a Photos picker view that SwiftUI provides.
* Image Caching with URLCache
> Store images and other media files to memory or storage with URLCache — an alternative to NSCache.
* A roadmap for improving Swift performance predictability: ARC improvements and ownership control
> Swift's high-level semantics try to relieve programmers from thinking about memory management in typical application code. In situations where predictable performance and runtime behavior are needed, though, the variability of ARC and Swift's optimizer have proven difficult for performance-oriented programmers to work with. The Swift Performance team at Apple is working on a series of language changes and features that will make the ARC model easier to understand, while also expanding the breadth of manual control available to the programmer. Many of these features are based on concepts John McCall had previously sketched out in the Ownership Manifesto ([Manifesto] Ownership), and indeed, the implementation of these features will also provide a technical foundation for move-only types and the other keystone ideas from that manifesto. We will be posting pitches for the features described in this document over the next few months.
>
> We want these features to fit within the "progressive disclosure" ethos of Swift. These features should not be something you need to use if you're writing everyday Swift code without performance constraints, and similarly, if you're reading Swift code, you should be able to understand the non-ARC-centric meaning of code that uses these features by ignoring the features for the most part. Conversely, for programmers who are tuning the performance of their code, we want to provide a predictable model that is straightforward to understand.
* Accessing Cached Data
> Control how URL requests make use of previously cached data.
* Google Nearby Connections API
> Nearby Connections enables advertising, discovery, and connections between nearby devices in a fully-offline peer-to-peer manner. Connections between devices are high-bandwidth, low-latency, and fully encrypted to enable fast, secure data transfers.
>
> A primary goal of this API is to provide a platform that is simple, reliable, and performant. Under the hood, the API uses a combination of Bluetooth, BLE, and Wi-Fi hotspots, leveraging the strengths of each while circumventing their respective weaknesses. This effectively abstracts the vagaries of Bluetooth and Wi-Fi across a range of Android OS versions and hardware, allowing developers to focus on the features that matter to their users.
>
> As a convenience, users are not prompted to turn on Bluetooth or Wi-Fi — Nearby Connections enables these features as they are required, and restores the device to its prior state once the app is done using the API, ensuring a smooth user experience.
* Solving "Required kernel recording resources are in use by another document" in Instruments
> So you have a Swift Package Manager project, without an xcodeproj, and you launch Instruments, and try to profile something (maybe Allocations), and you receive the message “Required kernel recording resources are in use by another document.” But of course you don’t have any other documents open in Instruments and you’re at a loss, so you’ve come here. Welcome.
* PhotoKit
> Work with image and video assets managed by the Photos app, including those from iCloud Photos and Live Photos.
* Are we server yet? Yes! And it's freaking fast!
> Swift has mature and production-ready frameworks in Vapor and Smoke, and newer ones like Hummingbird. These provide everything you'd expect from a web framework, from routing and middleware, to templating, and JSON/form handling. There are packages for everything, and more!
* AsyncImage
> A view that asynchronously loads and displays an image.
* withCheckedThrowingContinuation(function:_:)
> Suspends the current task, then calls the given closure with a checked throwing continuation for the current task.
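A minimal sketch of the typical use: bridging a completion-handler API into async/await. `legacyFetch` here is a hypothetical callback-based function, not a real API:

```swift
// A hypothetical completion-handler API to be bridged.
func legacyFetch(completion: @escaping (Result<Int, Error>) -> Void) {
    completion(.success(42))
}

// Wrap it: the task suspends until the continuation is resumed.
func fetch() async throws -> Int {
    try await withCheckedThrowingContinuation { continuation in
        legacyFetch { result in
            // Resume exactly once, with either a value or an error;
            // the "checked" variant traps on double-resume.
            continuation.resume(with: result)
        }
    }
}

let value = try await fetch()
print(value) // 42
```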

* CRAFTING INTERPRETERS
> Crafting Interpreters contains everything you need to implement a full-featured, efficient scripting language. You'll learn both high-level concepts around parsing and semantics and gritty details like bytecode representation and garbage collection. Your brain will light up with new ideas, and your hands will get dirty and calloused. It's a blast.
* Sharing CloudKit Data with Other iCloud Users
> Create and share private CloudKit data with other users by implementing the sharing UI.
* CloudKit Shared Records
> Share one or more records with other iCloud users.
* What Advanced Data Protection for iCloud means for Tact and other apps that use CloudKit
> Advanced Data Protection (ADP) for iCloud is the most intriguing of the three, and the rest of this post will discuss how it can improve the security of your data in Tact and other CloudKit-based apps.
>
> TL;DR for Tact: your Tact private chats will be end-to-end encrypted if all chat members have enabled Advanced Data Protection on their accounts.
>
> TL;DR for any CloudKit app: your records in iCloud will be end-to-end encrypted if certain conditions are met. You have no way to verify some of the conditions on your end.
* Designing and Creating a CloudKit Database
> Create a schema to store your app's objects as records in iCloud using CloudKit.
>
> After you enable CloudKit in your app, you create a schema for your container that describes how to store your objects. A schema defines record types and the possible relationships between them. A record type is a template for the allowed keys and values of a record. This relationship is analogous to how a class (record type) defines the properties an instance (record) can have.
* CKRecord.Reference
> A relationship between two records in a record zone.
>
> A CKReference object creates a many-to-one relationship between records in your database. Each reference object stores information about the one record that is the target of the reference. You then save the reference object in the fields of one or more records to create a link from those records to the target. Both records must be in the same zone of the same database.
* task(priority:_:)
> Adds an asynchronous task to perform before this view appears.
* task(id:priority:_:)
> Adds a task to perform before this view appears or when a specified value changes.
* Using cktool
> cktool is stateless and passes all operations to the CloudKit Management API as individual operations.

  • Clarification needed on UnsafeContinuation documentation

    They're both right. The task stops executing any async code at all before the continuation is formed, and any state will be moved off of the callstack into the task object at that point. The closure is then immediately executed in the same execution context (in other words, the current thread) with the continuation as a parameter. Once the closure returns, control goes back to the executor.

  • How to update HomePod after you have enabled Advanced Data Protection for iCloud

    Learn what to do if you can’t set up or update your HomePod after Advanced Data Protection is enabled.

  • CurrentValueSubject

    A subject that wraps a single value and publishes a new element whenever the value changes.

    Unlike PassthroughSubject, CurrentValueSubject maintains a buffer of the most recently published element.

  • Freestanding Macros

    SE-0382 "Expression macros" introduces macros into Swift. The approach involves an explicit syntax for uses of macros (prefixed by #), type checking for macro arguments prior to macro expansion, and macro expansion implemented via separate programs that operate on the syntax tree of the arguments.

    This proposal generalizes the #-prefixed macro expansion syntax introduced for expression macros to also allow macros to generate declarations and statements, enabling a number of other use cases, including:

    • Subsuming the #warning and #error directives introduced in SE-0196 into macros.
    • Logging entry/exit of a function.
  • Attached Macros

    Attached macros provide a way to extend Swift by creating and extending declarations based on arbitrary syntactic transformations on their arguments. They make it possible to extend Swift in ways that were only previously possible by introducing new language features, helping developers build more expressive libraries and eliminate extraneous boilerplate.

  • GitHub Blocks — Reimagine repositories

    Extend your codebase with custom, interactive blocks.

    Build rich documentation, enhance your workflows, and bring your repository to life.

  • The latest GitHub previews ✨

    Be the first to try out GitHub’s new features

  • Truncating git history
git checkout --orphan temp e41d7f633c45c46bd42e97cecf93204191d9e4c9
git commit -m "Truncate history"
git rebase --onto temp e41d7f633c45c46bd42e97cecf93204191d9e4c9 master
  • ImageRenderer

    An object that creates images from SwiftUI views.

  • How task locals work

    Task locals are what power this library under the hood, and so it can be important to first understand how task locals work and how task local inheritance works.

    Task locals are values that are implicitly associated with a task. They make it possible to push values deep into every part of an application without having to explicitly pass the values around. This makes task locals sound like a “global” variable, which you may have heard is bad, but task locals have 3 features that make them safe to use and easy to reason about:

    • Task locals are safe to use from concurrent contexts. This means multiple tasks can access the same task local without fear of a race condition.
    • Task locals can be mutated only in specific, well-defined scopes. It is not allowed to forever mutate a task local in a way that all parts of the application observe the change.
    • Task locals are inherited by new tasks that are spun up from existing tasks.
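    The three properties above can be sketched with Swift's own @TaskLocal property wrapper (the requestID name here is made up for illustration):

```swift
// A minimal sketch of a task-local value.
enum Trace {
    // Every task sees "unknown" unless an enclosing scope binds another value.
    @TaskLocal static var requestID = "unknown"
}

func currentRequestID() -> String {
    Trace.requestID
}

// Bindings are scoped: the override is visible inside `withValue` (and in
// any child tasks created there), and automatically reverts afterwards.
Trace.$requestID.withValue("abc-123") {
    precondition(currentRequestID() == "abc-123")
}
precondition(currentRequestID() == "unknown")
```

    Because a binding can only be introduced with withValue, there is no way to "forever" mutate the value for unrelated parts of the program, which is exactly the second property above.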
  • ViewToPDF
import SwiftUI

extension View {
    @MainActor
    func pdf(size: ProposedViewSize) -> Data {
        let renderer = ImageRenderer(content: self)
        renderer.proposedSize = size
        let pdfData = NSMutableData()
        renderer.render { size, render in
            var mediaBox = CGRect(origin: .zero, size: size)
            let consumer = CGDataConsumer(data: pdfData)!
            let pdfContext = CGContext(consumer: consumer, mediaBox: &mediaBox, nil)!
            pdfContext.beginPage(mediaBox: &mediaBox)
            render(pdfContext)
            pdfContext.endPage()
            pdfContext.closePDF()
        }
        return pdfData as Data
    }
}
  • HTMLKit

    Create and render HTML templates with HTMLKit.

  • Writing Haskell with Chat GPT

    So overall, Chat GPT does quite well with these basic challenges! It would be interesting to take this further still and see if we could make our server program more and more complex, like adding custom functionality for different routes. But Chat GPT definitely seems useful enough to help with basic tasks, even in a less well-known language like Haskell!

  • TaskLocal

    Property wrapper that defines a task-local value key.

    A task-local value is a value that can be bound and read in the context of a Task. It is implicitly carried with the task, and is accessible by any child tasks the task creates (such as TaskGroup or async let created tasks).

  • ActorIsolated

    A generic wrapper for isolating a mutable value to an actor.

  • LockIsolated

    A generic wrapper for isolating a mutable value with a lock.
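    A minimal sketch of what such a lock-isolated wrapper might look like (the name LockIsolatedValue is made up here; the real library type has a richer API):

```swift
import Foundation

// All reads and mutations of the wrapped value happen while holding an
// NSLock, so the wrapper is safe to share across threads.
final class LockIsolatedValue<Value>: @unchecked Sendable {
    private var _value: Value
    private let lock = NSLock()

    init(_ value: Value) {
        self._value = value
    }

    // A consistent snapshot of the current value.
    var value: Value {
        lock.lock()
        defer { lock.unlock() }
        return _value
    }

    // Mutate the value while holding the lock for the whole operation.
    func withValue<T>(_ operation: (inout Value) throws -> T) rethrows -> T {
        lock.lock()
        defer { lock.unlock() }
        return try operation(&_value)
    }
}
```

    Exposing mutation only through withValue keeps each read-modify-write atomic, which a plain `value` setter would not guarantee.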

  • CKUserIdentity

    A user identity provides identifiable data about an iCloud user, including their name, user record ID, and an email address or phone number. CloudKit retrieves this information from the user’s iCloud account. A user must give their consent to be discoverable before CloudKit can provide this data to your app. For more information, see requestApplicationPermission(_:completionHandler:).

  • Neovim's Terminal Emulator

    In Neovim, we can launch a terminal emulator by running the :terminal command.

  • BindableState

    A property wrapper type that can designate properties of app state that can be directly bindable in SwiftUI views.

  • BindableAction

    An action type that exposes a binding case that holds a BindingAction.

  • BindingReducer

    A reducer that updates bindable state when it receives binding actions.

  • TCA Working with SwiftUI bindings

    Learn how to connect features written in the Composable Architecture to SwiftUI bindings.

  • TCA Store

    A store represents the runtime that powers the application. It is the object that you will pass around to views that need to interact with the application.

  • omaralbeik/Stores

    A typed key-value storage solution to store Codable types in various persistence layers like User Defaults, File System, Core Data, Keychain, and more with a few lines of code!

  • 💡 The big idea

    🧠

    Each Unison definition is identified by a hash of its syntax tree.

    Put another way, Unison code is content-addressed.

  • Unison — Mermaid

    Draw charts renderable using mermaid-js. Only sequence diagrams supported at the moment.

  • Unison — Html

    This is a small Html combinator library for building up an Html document. The API is heavily inspired by the Elm Html library.

  • Making Haskell lenses less pointless

    type Value f s t r = (s -> f t) -> f r

  • Mac OS X and PDF

    OS X is the first operating system on the market that actually uses PDF technology within the operating system itself. Apple calls this technology ‘Quartz’. Quartz is a layer of software that runs on top of Darwin, the core (or kernel) of the Mac OS X operating system. It is responsible for the rendering of all 2D objects. Alongside Quartz, OpenGL takes care of handling 3D data (used in games like Quake or Unreal as well as professional 3D applications like Maya) and QuickTime handles multimedia stuff (movies, sound,…).

  • Compiled and Interpreted Languages: Two Ways of Saying Tomato

    First that language specifications and implementations are very different things. Second, via the series of evolving BF implementations, that any given language can be implemented as an interpreter or a compiler.

  • Understanding SwiftUI view lifecycles

    Here are a few lessons to take away from this:

    • Different container views may have different performance and memory usage behaviors, depending on how long they keep child views alive.
    • onAppear isn’t necessarily called when the state is created. It can happen later (but never earlier).
    • onAppear can be called multiple times in some container views. If you need a side effect to happen exactly once in a view’s lifetime, consider writing yourself an onFirstAppear helper, as shown by Ian Keen and Jordan Morgan in Running Code Only Once in SwiftUI (2022-11-01).
  • Low-level Swift optimization tips

    This article documents several techniques I have found effective at improving the run time performance of Swift applications without resorting to “writing C in .swift files”. (That is, without resorting to C-like idioms and design patterns.) It also highlights a few pitfalls that often afflict Swift programmers trying to optimize Swift code.

  • Swift async/await in AWS lambdas

    The changes for this unreleased 1.0 version include, among others, the adoption of async/await. In this article we'll rewrite an existing lambda to use the latest main revision of the swift-aws-lambda-runtime package and take an early look at what the new APIs look like and how they enable us to use async/await in AWS lambdas.

  • Text modifiers in SwiftUI

    Apart from regular view modifiers in SwiftUI, there are also text modifiers. They apply specific styles to a Text view and return another Text rather than some View. We can see the list of available text modifiers in the Text view documentation under the "Styling the view’s text" and other sections. These are the ones that have Text as their return type, for example func foregroundColor(Color?) -> Text.

  • Text

    A view that displays one or more lines of read-only text.

  • Swift Protocol Witness Matching Manifesto
    • A protocol requirement (or just requirement) is a declaration inside a protocol that all conforming types must satisfy.
    • A protocol witness (or just witness) is a value or a type that satisfies a protocol requirement.
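    The two terms can be illustrated with a tiny example (the Describable protocol here is made up for illustration):

```swift
// `description` is a protocol requirement: a declaration every
// conforming type must satisfy.
protocol Describable {
    var description: String { get }
}

struct Point: Describable {
    var x: Int
    var y: Int

    // This property is the witness that the compiler matches to the
    // Describable.description requirement.
    var description: String { "(\(x), \(y))" }
}
```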
  • Unison Cloud Trailblazers Program

    We are looking for people to help us try out unison.cloud before wider public release, and also to give us early feedback on brand new Unison Cloud functionality we're developing. If you're interested in participating in this program, please fill out this short questionnaire and we'll get back to everyone within 7 days about next steps. We have somewhat limited space for now, so don't hesitate if you are interested!

  • Unison HTTP Server

    An HTTP server for the Unison programming language.

  • Unison HTTP Client

    This is an HTTP client library. It can be used to make HTTP requests and inspect their responses.

  • Unison Optics

    An attempt at an optics library for Unison — now with support for indexed and coindexed(!) optics!

  • Unison Codec

    This is a library for writing compositional binary codecs that can serialize and/or deserialize Unison values to and from binary formats. Functions are provided for writing and reading values to and from Bytes, network sockets, and files.

  • Unison Language Server

    Supported features:

    • Autocompletion
    • Inline type and parser error messages
    • Show type on hover
  • Memory Safe Languages in Android 13

    To date, there have been zero memory safety vulnerabilities discovered in Android’s Rust code.

  • allowsHitTesting(_:)

    Configures whether this view participates in hit test operations.

  • UIApplicationDelegateAdaptor

    A property wrapper type that you use to create a UIKit app delegate.

  • PreviewProvider

    A type that produces view previews in Xcode.

  • Previews in Xcode

    Generate dynamic, interactive previews of your custom views.

  • Improving the speed of incremental builds

    Tell the Xcode build system about your project’s target-related dependencies, and reduce the compiler workload during each build cycle.

  • Swift UI camera app without using UIView or UI*

    In this article, I'm writing down my experience and the code that worked to get an app running in Swift UI that uses the user's camera and shows a live feed on the screen. This app works in both macOS (Apple Silicon tested) and iOS.

  • Android Basics with Compose

    Welcome to Android Basics with Compose! In this course, you'll learn the basics of building Android apps with Jetpack Compose, the new UI toolkit for building Android apps. Along the way, you'll develop a collection of apps to start your journey as an Android developer.

  • How to find which data change is causing a SwiftUI view to update

    Peter Steinberger has a helpful tip for discovering when the body property of a view is being reinvoked: assign a random background color to one of its views. This will be re-evaluated along with the rest of the body, so if body is being called a lot then your views will flicker as they change background.

  • Previewing Stateful SwiftUI Views — Interactive Previews for your SwiftUI views

    When building UIs in SwiftUI, we tend to build two kinds of UI components: screens and (reusable) views. Usually, we start by prototyping a screen, which will inevitably result in a Massive ContentView that we then start refactoring into smaller, reusable components.

  • Auto-Completion Feature Improvements in Xcode 14

    Apple describes Xcode version 14 as "everything you need" to build software for their platforms. The company implemented a number of improvements, such as several updated auto-completion functions, to increase Xcode’s performance. Read on to find out which ones I have found particularly important and see how they work in practice.

  • func runtimeWarn(_ message: @autoclosure () -> String, file: StaticString? = nil, line: UInt? = nil)

    Xcode runtime warnings offer a much better experience than traditional assertions and breakpoints, but Apple provides no means of creating custom runtime warnings ourselves. To work around this, we hook into SwiftUI's runtime issue delivery mechanism, instead.

  • Reliably testing code that adopts Swift Concurrency?

    The calls to Task.yield feel wrong to me, but I don’t know of an alternative.

    The real problem with this code, though, is that the test occasionally fails! You can use Xcode’s “run repeatedly” feature 1,000 times and will almost always get a failure or two. From what I can gather, this is because there’s no guarantee that Task.yield will suspend long enough for the task to do its work.

    I can sprinkle in more Task.yields and the test fails less often.

  • CloudKit.Notification

    A CloudKit.Notification object represents a push notification that was sent to your app. Notifications are triggered by subscriptions that you save to the database. To subscribe to record changes and handle push notifications, see the saveSubscription method in CloudKit.Database.

  • CloudKit Remote Records

    Use subscriptions and change tokens to efficiently manage modifications to remote records.

2022

December

  • CRAttributes

    Enables collaboration on text field (and other fields) across multiple iOS devices.

    It's based on an operation-based CRDT, with replication leveraging native Core Data CloudKit sync. A nearly vanilla implementation of the RGA CRDT (one operation per character).

  • Designing for Key-Value Data in iCloud

    To store discrete values in iCloud for app preferences, app configuration, or app state, use iCloud key-value storage. Key-value storage is similar to the local user defaults database; but values that you place in key-value storage are available to every instance of your app on all of a user’s various devices.

  • NSUbiquitousKeyValueStore

    An iCloud-based container of key-value pairs you use to share data among instances of your app running on a user's connected devices.

    Use the iCloud key-value store to make preference, configuration, and app-state data available to every instance of your app on every device connected to a user’s iCloud account. You can store scalar values such as BOOL, as well as values containing any of the property list object types: NSNumber, NSString, NSDate, NSData, NSArray, and NSDictionary.

  • All you need to know about CloudKit Art

    Using CloudKit is an interesting solution for local iOS applications requiring synchronization among different devices. It allows simple storage of binary data such as photos or films as well as creating a more complicated database. However, if you want to store and synchronize a small amount of data among one user’s devices (e.g. a common configuration), it’s worth thinking about using NSUbiquitousKeyValueStore which also employs iCloud and doesn’t require configuring the CloudKit container.

  • Good Spirits: Syncing Data Statelessly

    I intended for all this to lead to easy, stateless CloudKit sync. Instead of enforcing tight coupling between the persistence and cloud layers, I would have a “sync whenever” system that was guaranteed to succeed whenever it happened to run. Both the local SQLite database and CloudKit would keep around the same data and log tables. On sync, the local store would request the version vector from the CloudKit log table. Based on this timestamp, the local store would know which local check-ins needed to be uploaded, and could additionally request any check-ins from the server that were needed to complete the local database. Merge between check-ins was eventually consistent and conflict-free, and nothing was ever deleted, so you’d never need to do anything more complicated than send sets of check-ins and event log entries around. Sync would become completely stateless!

  • Developing a Distributed Data App with SwiftUI and CRDTs

    That’s all folks! In this series, we’ve seen how you can design and create your own replicating data types, and combine them to into full distributed data apps. These types have a data cost, but the payoff is that they make syncing more powerful and easier to implement. They also free your app from lock in — you can sync via any cloud service, and even peer-to-peer.

  • CKAsset

    An external file that belongs to a record.

    Use assets to incorporate external files into your app’s records, such as photos, videos, and binary files. Alternatively, use assets when a field’s value is more than a few kilobytes in size. To associate an instance of CKAsset with a record, assign it to one of its fields.

  • Sharing data between your App Clip and your full app

    Use CloudKit, Sign in with Apple, shared user defaults or containers, and the keychain to offer a smooth transition from your App Clip to your app.

  • CloudKit JS

    Provide access from your web app to your CloudKit app’s containers and databases.

  • Module vs Product vs Target
    • module: A group of interrelated sources intended to always be built together. (cf. whole module optimization). This is what is referenced when you import MyModule, and when you use its name for disambiguating a symbol: MyModule.Data vs Foundation.Data.
      • In general English, a module is “one of a set of standardized parts or independent units that can be used to construct a more complex structure”
    • target: A unit of the build result; a particular thing you might aim to build on its own. A Swift target is pretty much equal to a Swift module, so they are often used interchangeably. However module tends to refer more to the grouping whereas target refers more to the result. Another difference is that a target does not necessarily have to contain source; it could be something else such as a resource bundle and such a target is not a module. Targets are referenced by the package manifest’s .target(...) and testTarget(...), and by Xcode under “File → New → Target...” and so on.
      • In general English, a target is “a mark or point at which one fires or aims” or “an objective or result towards which efforts are directed”
    • product: A unit of functionality you want to vend to clients. Often a product is also a single target, but the reverse is not true. Many targets are not intended for others to use (such as test targets), and those are never described as products. Products are defined in the package manifest’s products argument and referenced in client target’s dependency list with .product(...) (unless the reference is reduced to a string literal).
      • In general English, a product is “an article or substance that is manufactured or refined for sale”
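    The three terms above can be seen side by side in a package manifest (the package and target names here are made up for illustration):

```swift
// swift-tools-version:5.7
// A hypothetical Package.swift illustrating the distinction: two targets
// (a library module and its tests), but only one product vended to clients.
import PackageDescription

let package = Package(
    name: "MyPackage",
    products: [
        // The product clients reference via .product(name: "MyLibrary", ...).
        .library(name: "MyLibrary", targets: ["MyLibrary"]),
    ],
    targets: [
        // Each entry is a target; MyLibrary is also the module you `import`.
        .target(name: "MyLibrary"),
        // Test targets are never products.
        .testTarget(name: "MyLibraryTests", dependencies: ["MyLibrary"]),
    ]
)
```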
  • String to UInt32

    "ptru”.utf8.reduce(0) { $0 << 8 | $1 }

  • Faster Builds with Code Signing Hacks

    Code signing is one of the few operations that takes just as long to do for an incremental build as for a clean build. It also takes more time the larger an app grows in size. As a result, it can become a bottleneck for incremental builds. Here are some tricks to reduce that time. They’re all technically undocumented and may break in the future, but they’re also used by large companies with no apparent downsides.

    Note: these tricks are for debug builds only.

    Note: see how much time code signing takes during builds, i.e. how much time you can actually save, and decide if that amount matters to you.

  • vs — Autocomplete vs Graph

    Have you ever noticed how Google auto-completes your queries?

    What if we repeat the same query for every suggestion?

    Now let's repeat this process one more time for every found suggestion. But instead of drawing a picture, let's draw a line between each suggestion

    And this is exactly what this website is doing for you.

    I found this technique useful for finding alternatives or performing market research. Obviously, for this technique to work, Google needs to know enough about your query.

  • Faster Apple Builds with the lld Linker

    TL;DR: lld is a great choice for faster linking of debug binaries on Apple platforms. Steps on how to integrate are in the section below.

    Linking is one of the main bottlenecks for incremental builds. Thousands upon thousands of developer-hours are spent each year waiting on debug builds to link, and so linker optimization is a major topic. Linkers are complicated beasts that have to do intricate transformations on huge amounts of data at lightning speed, so it requires a lot of work. This blog post will discuss the past, present, and future of linker optimization for Apple platforms. It also includes a practical section on how to integrate lld at present. If you aren’t familiar with linking, read about it here and look for the linking step at the end of your build logs.

  • Measuring your iOS app’s true startup time in production (2018)

    Before an app even runs main and +applicationDidFinishLaunching, a considerable amount of work is done, including setting up dylibs for use, running +load methods, and more. This can take 500ms or more. Blog posts like this one show you how to measure it with the debugger, using DYLD_PRINT_STATISTICS, but it’s hard to find any help for measurement in the wild. Note the special handling for iOS 15’s pre-warming feature.

  • Getting started with CloudKit

    CloudKit is an easy way to store data in the cloud, sync between multiple devices, and share it between the app’s users. This week we will learn how to start using CloudKit in the app to save and fetch data from the cloud and sync between multiple user devices.

  • Zone sharing in CloudKit

    CloudKit provides you ready to use data sharing API that allows you to implement collaborative features of your app without much effort. There are two ways to share data via CloudKit: record sharing and zone sharing. In this post, we will talk about zone sharing.

  • Small Design Up-Front Removes Agile — part 3

    One of the footnotes in Agile is "small design up-front". What happens when that is done so well that it removes the need for what Agile provides? During this meetup, we explored aspects of the software development life cycle that are affected by proper design. UML is an example of a 1:1 mapping of code to design documents. Instead, we can reach another level of understanding, at an order of magnitude faster pace, when we design information flow alone.

  • Find Problematic Constraint

    If you see a problematic constraint, copy its address from the console and use it to filter in the view debugger. The view debugger will show you the exact constraint in the user interface.

  • Efficiently Managing Multiple Async Tasks in SwiftUI

    We will use the cancellation token concept to solve an asynchronous problem in this week’s article today.

  • A Comprehensive Guide to URLs in Swift and SwiftUI

    URLs can represent all kinds of resources.

    How you handle a URL in your app depends on (a) the resource and (b) your app’s objectives and architectural considerations.

  • Drawing Paths and Shapes

    Users receive a badge whenever they visit a landmark in their list. Of course, for a user to receive a badge, you’ll need to create one. This tutorial takes you through the process of creating a badge by combining paths and shapes, which you then overlay with another shape that represents the location.

    If you want to create multiple badges for different kinds of landmarks, try experimenting with the overlaid symbol, varying the amount of repetition, or changing the various angles and scales.

    Follow the steps to build this project, or download the finished project to explore on your own.

  • UI Testing using Page Object pattern in Swift

    UI tests are expensive and fragile but vital and usable. That’s why you should take care of them as much as you take care of your main codebase. The Page Object pattern is a great way to simplify your UI tests and reuse the logic across the many UI tests.

  • Link fast: Improve build and launch times (WWDC22 Notes)

    Description: Discover how to improve your app's build and runtime linking performance. We'll take you behind the scenes to learn more about linking, your options, and the latest updates that improve the link performance of your app.

  • dyld4 design

    The goal of dyld4 is to improve on dyld3 by keeping the same mach-o parsers, but doing better in the non-customer case by supporting just-in-time loading that does not require pre-built closures.

  • Stores

    A typed key-value storage solution to store Codable types in various persistence layers like User Defaults, File System, Core Data, Keychain, and more with a few lines of code!

  • Trie in Swift, the Autocorrect Structure

    The Trie has a faster lookup than an imperfect hash map, doesn’t have key collisions, and its main use is to represent string dictionaries.
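    A minimal trie sketch: insert words and query membership or prefixes. Lookup walks one node per character, so it never hashes the whole key, and prefix queries come for free:

```swift
// A minimal trie for Character keys.
final class Trie {
    private final class Node {
        var children: [Character: Node] = [:]
        var isWord = false
    }

    private let root = Node()

    // Walk the word character by character, creating nodes as needed,
    // and mark the final node as a complete word.
    func insert(_ word: String) {
        var node = root
        for ch in word {
            if node.children[ch] == nil {
                node.children[ch] = Node()
            }
            node = node.children[ch]!
        }
        node.isWord = true
    }

    func contains(_ word: String) -> Bool {
        node(for: word)?.isWord ?? false
    }

    // True if any inserted word starts with the given prefix.
    func hasPrefix(_ prefix: String) -> Bool {
        node(for: prefix) != nil
    }

    private func node(for key: String) -> Node? {
        var node = root
        for ch in key {
            guard let next = node.children[ch] else { return nil }
            node = next
        }
        return node
    }
}
```

    The hasPrefix walk is what autocomplete builds on: collect every descendant of the prefix node that is marked as a word.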

  • MDM restrictions for Mac computers

    You can set restrictions, including modifying a device and its features, for Mac computers enrolled in a mobile device management (MDM) solution.

  • Storing Codable structs on the disk

    Today we discussed a simple way of storing Codable structs which we can fetch via REST API. Sometimes we don’t need complicated features of CoreData for simple JSON caching and it is enough to implement disk storage.
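    A minimal sketch of the idea, with JSONEncoder/JSONDecoder and plain file writes (the Article type and DiskStorage name are made up for illustration):

```swift
import Foundation

// A made-up Codable type standing in for an API response.
struct Article: Codable, Equatable {
    var title: String
    var body: String
}

// Persist any Codable value as a JSON file in a directory.
struct DiskStorage {
    let directory: URL

    func save<T: Encodable>(_ value: T, as name: String) throws {
        let data = try JSONEncoder().encode(value)
        try data.write(to: directory.appendingPathComponent(name), options: .atomic)
    }

    func load<T: Decodable>(_ type: T.Type, from name: String) throws -> T {
        let data = try Data(contentsOf: directory.appendingPathComponent(name))
        return try JSONDecoder().decode(type, from: data)
    }
}
```

    Writing with the .atomic option avoids leaving a half-written file behind if the process dies mid-save.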

  • A Brand-New iOS Conference in New York City

    New York, 04/18 & 04/19, 2023

  • Toggle Changes/Repos

    Toggle between changes and repositories in the Source Control navigator with the shortcut Command 2.

  • Reveal In Changes Navigator

    While you are in a file with local changes, use the shortcut Command Shift M to navigate to that file in the changes navigator.

  • Swift Evolution Visions

    Vision documents usually start by being solicited by the evolution workgroup with authority for that area. For areas within the Swift language and standard library, that is the Language Workgroup. While a vision is being developed, it is called a prospective vision, and it should be clearly identified as such. In this state, the vision carries no implicit endorsement.

    Eventually, the appropriate evolution workgroup may decide to officially approve a vision. This is an endorsement of the entire document, but the strength of that endorsement varies from section to section:

    • It is a strong endorsement of the vision's description of the current state of this part of the project. The evolution workgroup agrees with what the vision has to say about the problems the project has in this area.
    • It is a strong endorsement of the vision's stated goals for this part of the language. The evolution workgroup agrees that these are the right goals for evolution in this area to strive for, and it agrees that the vision prioritizes different goals appropriately.
    • It is a somewhat weaker endorsement of the overall approach laid out by the vision. The evolution workgroup agrees that this seems like the right basic approach to take; if it can be successfully carried out, it should achieve the goals the vision lays out. However, the evolution workgroup is not committed to the details of the approach, and it may change substantially as the vision is distilled into concrete proposals and reviewed.
    • It is only a very weak endorsement of the concrete ideas for proposals in the vision document. The evolution workgroup thinks these sound like the right ideas in the abstract but is not committed to any of them. The proposals will all need to go through normal evolution review, and they may be rejected or substantially changed from how they appear in the vision.

    Once the vision is approved, it acts as a foundation for subsequent pitches and proposals in its area. Pitches and proposals that implement or build on part of a vision should generally link back to the vision document.

    Vision documents are artifacts of the design process; they are not substitutes for language or release documentation. It is not expected that authors will continually update the vision document as the proposals emerging from it change. Revision may be appropriate if the vision document is actively causing confusion, for example because of a major shift in terminology since the document's development.

  • Why does Apple recommend to use structs by default?

    The consequence of this recommendation is that, from what I've seen in many projects, people tend to declare gigantic structs (esp. their JSON objects) and pass them around in functions or assign to variables without thinking that it can be a waste of memory and CPU cycles. In some edge cases the overhead can be significant and can be felt by the users.

  • Building custom layout in SwiftUI. LayoutValueKey.

    SwiftUI provides us with the LayoutValueKey protocol allowing us to register a custom layout parameter. We can use this type to attach any value we need to a view inside the layout and extract this value later in the layout cycle.

  • BuildSettingCondition

    A condition that limits the application of a build setting.

    By default, build settings are applicable for all platforms and build configurations. Use the .when modifier to define a build setting for a specific condition. Invalid usage of .when emits an error during manifest parsing. For example, it’s invalid to specify a .when condition with both parameters as nil.

  • Categories for AI

    This lecture series consists of two parts: the introductory lectures and the seminars. During the first part we'll have 1-2 introductory lectures per week, where we will teach the basics of category theory with a focus on applications to Machine Learning.

    The seminars will be deep dives into specific topics of Category Theory, some already showing applications to Machine Learning and some which have not been applied yet.

  • Xcode — Writing Testable Code

    The Xcode integrated support for testing makes it possible for you to write tests to support your development efforts in a variety of ways. You can use tests to detect potential regressions in your code, to spot the expected successes and failures, and to validate the behavior of your app. Testing improves the stability of your code by ensuring that objects behave in the expected ways.

    Of course, the level of stability you achieve through testing depends on the quality of the tests you write. Likewise, the ease of writing good tests depends on your approach to writing code. Writing code that is designed for testing helps ensure that you write good tests. Read the following guidelines to ensure that your code is testable and to ease the process of writing good tests.

    • Define API requirements. It is important to define requirements and outcomes for each method or function that you add to your project. For requirements, include input and output ranges, exceptions thrown and the conditions under which they are raised, and the type of values returned (especially if the values are instances of classes). Specifying requirements and making sure that requirements are met in your code help you write robust, secure code. See the Unit Testing Apps and Frameworks sample-code project for an example of using exceptions to identify and report incorrect library usage by client code.
    • Write test cases as you write code. As you design and write each method or function, write one or more test cases to ensure that the API’s requirements are met. Remember that it’s harder to write tests for existing code than for code you are writing.
    • Check boundary conditions. If a parameter for a method must have values in a specific range, your tests should pass values that include the lowest and highest values of the range. For example, if a procedure has an integer parameter that can have values between 0 and 100, inclusive, the test code for that method should pass the values 0, 50, and 100 for the parameter.
    • Use negative tests. Negative tests ensure that your code responds to error conditions appropriately. Verify that your code behaves correctly when it receives invalid or unexpected input values. Also verify that it returns error codes or raises exceptions when it should. For example, if an integer parameter must have values in the range 0 to 100, inclusive, create test cases that pass the values -1 and 101 to ensure that the procedure raises an exception or returns an error code.
    • Write comprehensive test cases. Comprehensive tests combine different code modules to implement some of the more complex behavior of your API. Although simple, isolated tests provide value, stacked tests exercise complex behaviors and tend to catch many more problems. These kinds of tests mimic the behavior of your code under more realistic conditions. For example, in addition to adding objects to an array, you could create the array, add several objects to it, remove a few of them using different methods, and then ensure that the set and number of remaining objects are correct.
    • Cover your bug fixes with test cases. Whenever you fix a bug, write one or more test cases that verify the fix.
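
    A runnable sketch of the boundary and negative tests described above, using plain assertions so it stays self-contained; the progressLabel function is hypothetical and accepts only 0...100:

```swift
enum ValidationError: Error, Equatable {
    case outOfRange(Int)
}

// Function under test: valid input range is 0...100, inclusive.
func progressLabel(_ percent: Int) throws -> String {
    guard (0...100).contains(percent) else { throw ValidationError.outOfRange(percent) }
    return "\(percent)%"
}

// Boundary tests: lowest, middle, and highest legal values.
assert(try! progressLabel(0) == "0%")
assert(try! progressLabel(50) == "50%")
assert(try! progressLabel(100) == "100%")

// Negative tests: values just outside the range must throw.
for bad in [-1, 101] {
    do {
        _ = try progressLabel(bad)
        assertionFailure("expected outOfRange for \(bad)")
    } catch let error as ValidationError {
        assert(error == .outOfRange(bad))
    } catch {
        assertionFailure("unexpected error: \(error)")
    }
}
```
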
  • XCTIssue

    An object that represents a test failure, and includes source code call stacks for test reporting and investigation.

  • Adding SQLCipher to Xcode Projects

    SQLite is already a popular API for persistent data storage in iOS apps so the upside for development is obvious. As a programmer you work with a stable, well-documented API that happens to have many good wrappers available in Objective-C, such as FMDB and Encrypted Core Data. All security concerns are cleanly decoupled from application code and managed by the underlying framework.

    The framework code of the SQLCipher project is open source, so users can be confident that an application isn't using insecure or proprietary security code. In addition, SQLCipher can also be compiled on Android, Linux, macOS and Windows for those developing cross-platform applications.

    There are two different options for integrating SQLCipher into an Xcode project. The first involves building the SQLCipher source amalgamation into the application. The second involves using CocoaPods. These tutorials assume familiarity with basic iOS or macOS app development and a working install of Xcode.

  • Internet Archive Scholar

    Search Millions of Research Papers

    This fulltext search index includes over 25 million research articles and other scholarly documents preserved in the Internet Archive. The collection spans from digitized copies of eighteenth century journals through the latest Open Access conference proceedings and pre-prints crawled from the World Wide Web.

  • The Future of Foundation

    Today, we are announcing a new open source Foundation project, written in Swift, for Swift.

    This achieves a number of technical goals:

    • No more wrapped C code. With a native Swift implementation of Foundation, the framework no longer pays conversion costs between C and Swift, resulting in faster performance. A Swift implementation, developed as a package, also makes it easier for Swift developers to inspect, understand, and contribute code.
    • Provide the option of smaller, more granular packages. Rewriting Foundation provides an opportunity to match its architecture to evolving use cases. Developers want to keep their binary sizes small, and a new FoundationEssentials package will provide the most important types in Foundation with no system dependencies to help accomplish this. A separate FoundationInternationalization package will be available when you need to work with localized content such as formatted dates and time. Other packages will continue to provide XML support and networking. A new FoundationObjCCompatibility package will contain legacy APIs which are useful for certain applications.
    • Unify Foundation implementations. Multiple implementations of any API risks divergent behavior and ultimately bugs when moving code across platforms. This new Foundation package will serve as the core of a single, canonical implementation of Foundation, regardless of platform.

    And this also achieves an important community goal:

    • Open contribution process. Open source projects are at their best when the community of users can participate and become a community of developers. A new, open contribution process will be available to enable all developers to contribute new API to Foundation.
  • Coduo lets you share and collaborate in Xcode

    Pair program and Chat in real-time. Simple, fast and effective.

  • Advanced Data Protection for iCloud

    Advanced Data Protection for iCloud is an optional setting that offers Apple’s highest level of cloud data security. When a user turns on Advanced Data Protection, their trusted devices retain sole access to the encryption keys for the majority of their iCloud data, thereby protecting it with end-to-end encryption. For users who turn on Advanced Data Protection, the total number of data categories protected using end-to-end encryption rises from 14 to 23 and includes iCloud Backup, Photos, Notes and more.

    Advanced Data Protection for iCloud will be available to U.S. users by the end of 2022 and will start rolling out to the rest of the world in early 2023.

    Conceptually, Advanced Data Protection is simple: All CloudKit Service keys that were generated on device and later uploaded to the available-after-authentication iCloud Hardware Security Modules (HSMs) in Apple data centers are deleted from those HSMs and instead kept entirely within the account’s iCloud Keychain protection domain. They are handled like the existing end-to-end encrypted service keys, which means Apple can no longer read or access these keys.

    Advanced Data Protection also automatically protects CloudKit fields that third-party developers choose to mark as encrypted, and all CloudKit assets.

  • Fundamentals of Lambda Calculus

    Lambda calculus is a formal system to study computable functions based on variable binding and substitution. Introduced in the 1930s by Alonzo Church, it is (in its typed form) the fundamental concept of functional programming languages like Haskell and Scala. Although the topic might seem very theoretical, some basic knowledge in lambda calculus can be very helpful to understand these languages, and where they originated from, much better. The goal of this article is to introduce some basic concepts of lambda calculus, which later on can be mapped to real world usage scenarios with functional programming languages.

  • Unlisted app distribution

    Release your apps that aren’t suited for public distribution as unlisted on the App Store, discoverable only with a direct link. Unlisted apps don’t appear in any App Store categories, recommendations, charts, search results, or other listings. In addition, they can be accessed through Apple Business Manager and Apple School Manager. Apps for partner sales tools, employee resources, or research studies are examples of good candidates for unlisted distribution.

    Distribute your app to:

    • Limited audiences (such as part-time employees, franchisees, partners, business affiliates, higher-education students, or conference attendees) through a standard link that’s usable on the App Store and Apple School Manager or Apple Business Manager.
    • Employee-owned devices that aren’t eligible to be managed through Apple School Manager or Apple Business Manager.
    • Managed and unmanaged devices.
    • All regions that are supported by the App Store.
  • You might not need a CRDT

    In developer discourse, the term CRDT sometimes gets thrown around as a synecdoche for a broader set of techniques to enable Figma-like collaborative features. But when we started talking to dozens of companies building ambitious browser-based apps, we found it rare for apps to use true CRDTs to power multiplayer collaboration.

  • Prototyping SwiftUI interfaces with OpenAI's ChatGPT

    Understand how to use OpenAI's ChatGPT conversational machine learning model to create working code for SwiftUI apps within a few minutes.

  • Encode and decode polymorphic types in Swift

    Swift’s protocol oriented programming is very helpful when dealing with polymorphic situations. But when we need to persist polymorphic data, we encounter some issues: Codable is not able to determine what concrete type to decode the saved data into.

    In this post, I will share a cut down version of the polymorphic Codable system that I have been using in my apps. I suggest you first take a look at my last post Reduce Codable Boilerplate with the Help of Property Wrappers that introduced how to use property wrappers in complex Codable situations.
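
    The post builds a property-wrapper-based system; here is a much smaller sketch of the underlying idea, a type discriminator stored alongside the payload (all type names below are hypothetical, not the article's):

```swift
import Foundation

// Encode a "type" tag next to the payload, and look the concrete type
// up in a registry when decoding.
protocol Animal: Codable {
    var name: String { get }
}

struct Dog: Animal { let name: String }
struct Cat: Animal { let name: String }

struct AnyAnimal: Codable {
    let animal: Animal

    private enum CodingKeys: String, CodingKey { case type, payload }
    private static let registry: [String: Animal.Type] = ["dog": Dog.self, "cat": Cat.self]

    init(_ animal: Animal) { self.animal = animal }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        let tag = try container.decode(String.self, forKey: .type)
        guard let type = Self.registry[tag] else {
            throw DecodingError.dataCorruptedError(forKey: .type, in: container,
                                                   debugDescription: "Unknown type tag: \(tag)")
        }
        animal = try type.init(from: container.superDecoder(forKey: .payload))
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        guard let tag = Self.registry.first(where: { $0.value == type(of: animal) })?.key else {
            throw EncodingError.invalidValue(animal, .init(codingPath: container.codingPath,
                                                           debugDescription: "Unregistered type"))
        }
        try container.encode(tag, forKey: .type)
        try animal.encode(to: container.superEncoder(forKey: .payload))
    }
}
```

    A heterogeneous array then round-trips through JSON: encode `[AnyAnimal(Dog(...)), AnyAnimal(Cat(...))]` and each element decodes back to its original concrete type.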

  • Native Network Monitoring In Swift

    We'll take a look at a native solution for monitoring network connectivity on iOS with Swift 5 and how to use the Network Link Conditioner.

  • NWPathMonitor

    An observer that you use to monitor and react to network changes.

  • dataTile for Simulator

    Forget debugging in console

    Automatically replace messy logs with beautiful visual data.

  • How to manage build settings using Xcode configuration files

    Xcode build configuration files are quite useful to manage configuration properties between different environments. You can also use them to easily assign a different app name and app icon for a specific environment.

  • The Best Refactoring You've Never Heard Of

    In physics, Feynman tells us that you cannot memorize formulas. You can't just go to the book, memorize a formula, and learn to apply it. There are too many of them. Instead, we need to learn about the relationships between the formulas, and derive them for ourselves when they're needed.

    And so, we want to do the same in programming. See many different APIs, many concepts, and see how there are fewer deeper ideas behind them. So, instead of seeing a bunch of approaches to a problem, I want you to see a web of, a single design and various ways you manipulate it to exactly the outcome that you want. So these are many transformations. Each of them could be their own talk or article. But today, I've taught you one very important transformation, not to rule them all but to rule a lot of them. And that is to, everyone with me, defunctionalize the continuation!

  • Encoding and Decoding Custom Types

    Make your data types encodable and decodable for compatibility with external representations such as JSON.

    Many programming tasks involve sending data over a network connection, saving data to disk, or submitting data to APIs and services. These tasks often require data to be encoded and decoded to and from an intermediate format while the data is being transferred.

    The Swift standard library defines a standardized approach to data encoding and decoding. You adopt this approach by implementing the Encodable and Decodable protocols on your custom types. Adopting these protocols lets implementations of the Encoder and Decoder protocols take your data and encode or decode it to and from an external representation such as JSON or property list. To support both encoding and decoding, declare conformance to Codable, which combines the Encodable and Decodable protocols. This process is known as making your types codable.
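
    For instance, conformance is synthesized automatically when all stored properties are themselves codable (the Landmark type below is hypothetical):

```swift
import Foundation

struct Landmark: Codable, Equatable {
    var name: String
    var foundingYear: Int
}

// Encode to JSON and back; the synthesized conformance handles both directions.
let landmark = Landmark(name: "Mojave National Preserve", foundingYear: 1994)
let data = try! JSONEncoder().encode(landmark)
let decoded = try! JSONDecoder().decode(Landmark.self, from: data)
assert(decoded == landmark)
```
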

  • We Fast-Tracked Our App Development With Kotlin Multiplatform Mobile

    Motive Fleet is a mobile app available on both Android and iOS, which our customers use to access critical real time information about their fleets and drivers on the go. We are continually adding new features to Motive Fleet to enhance our customers’ experience. To execute faster and ensure consistency in business logic across our Android and iOS mobile platforms, we have been exploring mobile cross-platform tools. Our goals included easier code management, fewer bugs, better build quality, and improved development timelines—and we achieved them with Kotlin Multiplatform Mobile (KMM). Read this blog to learn:

    • Advantages of a code-sharing framework
    • Kotlin Multiplatform Mobile (KMM) evaluation
    • Learnings & challenges from integrating KMM in our Motive Fleet app
    • Impact of KMM and future work

November

  • First Introduction to Cubical Type Theory

    This page aims to present a first introduction to cubical type theory, from the perspective of a mathematician who has heard about type theory but has no previous familiarity with it. Specifically, the kind of mathematician that we are appealing to is one who is familiar with some of the ideas in category theory and homotopy theory — however, the text also presents the concepts syntactically, in a way that can be read without any prior mathematical knowledge.

  • How to use a .xcconfig file and a .plist file with SPM

    How to use a .xcconfig file and a .plist with a Swift Package Manager based project.

  • Exploring the iOS Live Activities API

    In this article, we’ll explore the advantages of Live Activities and the ActivityKit framework that is used for displaying and working with Live Activities. In the demo portion of the article, we’ll show how to add Live Activities to a simple stock tracking app to display real-time information on both the Lock Screen and in the Dynamic Island.

  • Ruby adds a new core class called Data to represent simple immutable value objects

    Ruby 3.2 adds a new core class called Data to represent simple immutable value objects. The Data class helps define simple classes for value-alike objects that can be extended with custom methods.

    While the Data class is not meant to be used directly, it can be used as a base class for creating custom value objects. The Data class is similar to Struct, the key difference being that it is immutable.

  • Clean waiting in XCUITest

    At Cookpad, we wanted an extension method on XCUIElement similar to XCTestCase.waitForExpectations(timeout:handler:) to make tests readable, but we also have more expectations to wait for than just existence, and we didn't want to create multiple methods to do very similar things, e.g. waitUntilHittable, waitUntilLabelMatches, etc.

    Additionally, we didn't want to sleep, since the expectation might be met before the timeout and we would have waited too long, or the opposite, we wouldn't have waited long enough and would spend time verifying false positives. As a result, we created a solution utilising take-aways from all of the aforementioned techniques.

  • Getting started with Scrumdinger

    Learn the essentials of iOS app development by building a fully functional app using SwiftUI.

  • SwiftUI is convenient, but slow

    But I'd like to draw attention to some performance limitations, in the hope that a SwiftUI engineer might see this and understand pain points that might not be so obvious from their side.

  • canDeriveCodable(NominalTypeDecl *NTD, KnownProtocolKind Kind)

    Structs, classes and enums can explicitly derive Encodable and Decodable conformance (explicitly meaning we can synthesize an implementation if a type conforms manually).

  • Building custom layout in SwiftUI. Basics.

    Nowadays, SwiftUI provides the Layout protocol allowing us to build super-custom layouts by digging into the layout system without using GeometryReader. Layout protocol brings us the incredible power of building and reusing any layout you can imagine.

  • An Approach for Migrating From Objective-C to Swift
    • Create Swift islands and expand them over time.
    • Create shims for existing Objective-C objects to call your new Swift ones.
    • Use value types within the Swift portions of your codebase, and wrap them in Objective-C compatible reference types for the Objective-C parts.
    • Try to convert the ‘messaging space’ of each subsystem to Swift as early as possible, and then the messaging space between subsystems. Wrap Objective-C types that you’re not ready to tackle yet with Swift friendly interfaces. If these are working well for you, then they can stay as Objective-C on the inside for years.
  • What is the @objcMembers attribute? (2019)

    If you just want to expose a single method or property, you can mark that method using the @objc attribute. However, if you want all methods in a class to be exposed to Objective-C you can use a shortcut: the @objcMembers keyword.
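
    A short sketch (Apple platforms only, since it needs the Objective-C runtime; the Person class is hypothetical):

```swift
import Foundation

// Every member of this class is exposed to Objective-C,
// without marking each one individually with @objc.
@objcMembers class Person: NSObject {
    var name: String
    init(name: String) { self.name = name }
    func greet() -> String { "Hello, \(name)" }
}
```
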

  • Understanding different cache policies when working with URLRequest in Swift

    By choosing a cache policy, we can decide whether caching should depend on expiration dates, whether it should be disabled entirely, or whether the server should be contacted to determine if the content has changed since the last request.
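
    For instance (the URL is hypothetical):

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

let url = URL(string: "https://example.com/feed.json")!

// Ignore any cached copy and always hit the network.
var fresh = URLRequest(url: url)
fresh.cachePolicy = .reloadIgnoringLocalCacheData

// Prefer the cached copy regardless of age; load only on a cache miss.
let offlineFirst = URLRequest(url: url,
                              cachePolicy: .returnCacheDataElseLoad,
                              timeoutInterval: 15)
```

    The default, .useProtocolCachePolicy, follows the caching headers the server sends.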

  • Slow App Startup Times (2016)

    A lot happens before the system executes your app’s main() function and calls app delegate functions like applicationWillFinishLaunching. Before iOS 10 it was not easy to understand why an app was slow to launch for reasons other than your own code. It has been possible to add the DYLD_PRINT_STATISTICS environment variable to your project scheme but the output was hard to figure out. With iOS 10 Apple has made the output from enabling DYLD_PRINT_STATISTICS much easier to understand.

  • Diagnostic flags in Clang

    This page lists the diagnostic flags currently supported by Clang.

  • ObjectIdentifier

    A unique identifier for a class instance or metatype.

    This unique identifier is only valid for comparisons during the lifetime of the instance. In Swift, only class instances and metatypes have unique identities. There is no notion of identity for structs, enums, functions, or tuples.
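
    For example:

```swift
final class Session {}

let a = Session()
let b = Session()
let aAlias = a

// Identifiers compare equal only for the same instance.
assert(ObjectIdentifier(a) == ObjectIdentifier(aAlias))
assert(ObjectIdentifier(a) != ObjectIdentifier(b))

// Metatypes have stable identities too, usable as dictionary keys.
let names: [ObjectIdentifier: String] = [ObjectIdentifier(Session.self): "Session"]
assert(names[ObjectIdentifier(Session.self)] == "Session")
```
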

  • Notes for working with Xcode VIM mode

    This document is a scratchpad for helping me learn commonly used actions in Xcode's VIM mode.

    Commands are case-sensitive. A command of N means pressing shift + n on the keyboard.

  • Kotlin/Native as an Apple framework — tutorial

    Kotlin/Native provides bi-directional interoperability with Objective-C/Swift. Objective-C frameworks and libraries can be used in Kotlin code. Kotlin modules can be used in Swift/Objective-C code too. Besides that, Kotlin/Native has C Interop. There is also the Kotlin/Native as a Dynamic Library tutorial for more information.

  • The evolution of scalable CSS

    A deep dive into the problems with scaling CSS on large projects. Understand the evolution of CSS best practices.

  • Introduction to SwiftUI Modularisation with SPM

    Today we did a brief introduction to local SPM packages and how to prepare your app for modularisation. A checklist was used to guide us through the whole process, which makes things easier for anyone who wants to start this endeavor.

  • Why is Rosetta 2 fast?

    I believe there’s significant room for performance improvement in Rosetta 2, by using static analysis to find possible branch targets, and performing inter-instruction optimisations between them. However, this would come at the cost of significantly increased complexity (especially for debugging), increased translation times, and less predictable performance (as it’d have to fall back to JIT translation when the static analysis is incorrect).

    Engineering is about making the right tradeoffs, and I’d say Rosetta 2 has done exactly that. While other emulators might require inter-instruction optimisations for performance, Rosetta 2 is able to trust a fast CPU, generate code that respects its caches and predictors, and solve the messiest problems in hardware.

  • XCFrameworks

    This post is about how one bad assumption about XCFrameworks turned into multiple hours of needless effort. I wanted to quickly share my experience so others could avoid falling into the same pitfall. In retrospect, the problem seems obvious, but it wasn’t when I just encountered it.

  • When does a SwiftUI Environment get retained?

    The answer depends on how we use SwiftUI. For an app entirely written using it, one might argue that it gets released whenever the app finishes. But what about a UIKit app that uses some SwiftUI views?

    1. Dispose of any SwiftUI View values not used anymore
    2. Dispose of any UIHostingController references not used anymore
    3. Watch out for memory leaks in:
      • UIViews used within SwiftUI
      • references between your UIViews and your environment objects
      • UIViewControllers presenting the UIHostingControllers
      • the environment objects themselves

October

  • ComposableArchitecture Documentation

    The Composable Architecture (TCA, for short) is a library for building applications in a consistent and understandable way, with composition, testing, and ergonomics in mind. It can be used in SwiftUI, UIKit, and more, and on any Apple platform (iOS, macOS, tvOS, and watchOS).

  • Non-exhaustive testing in the Composable Architecture

    Testing is by far the #1 priority of the Composable Architecture. The library provides a tool, the TestStore, that makes it possible to exhaustively prove how your features evolve over time. This not only includes how state changes with every user action, but also how effects are executed, and how data is fed back into the system.

    The testing tools in the library haven’t changed much in the 2 and a half years since release, but thanks to close collaboration with Krzysztof Zabłocki and support from his employer, The Browser Company, the 0.45.0 release of the library brings first class support for “non-exhaustive” test stores.

  • Create a bootable Ventura USB drive using Terminal

    Another method to make a bootable USB drive is createinstallmedia command in Terminal.

    1. Rename USB Volume to MyVolume
    2. Now type the following command into the Terminal window: sudo /Applications/Install\ macOS\ Ventura.app/Contents/Resources/createinstallmedia --volume /Volumes/MyVolume
  • macOS 13 Ventura Final & Beta Full Installers

    This database will contain download links for macOS 13 Ventura full Installer pkg files (InstallAssistant.pkg). This file is the same full installer that you would download directly from the App Store for Intel and Apple Silicon M1 Mac Computers. The InstallAssistant.pkg is stored on Apple’s servers and contains the full “Install macOS.app”. Once downloaded, all you need to do is install the pkg and the full installer of macOS will be in your applications folder. This change was made when Apple revised the full installer for Big Sur. The InstallAssistant.pkg is not available for Catalina or Mojave.

  • Swift Concurrency - Things They Don’t Tell You

    Swift Concurrency provides a really nice way of writing asynchronous code. Support for async-await has been to me the most awaited feature in Swift.

    However, with great power comes great responsibility. If you learn from tutorials or even from the documentation, it’s really hard to find some details on how it works under the hood. Basically, Swift Concurrency is advertised as safe to use, because in theory the correctness is being checked by the compiler.

    This way of “selling” Swift Concurrency encourages people to just jump in, add async-await to existing code, and run some Tasks without really knowing what is going on under the hood. Unfortunately, there are many traps around concurrency, and no… the compiler doesn’t check everything.

    To be honest, even after performing tests, reading documentation, and watching WWDC sessions, I’m still not fully confident with Swift Concurrency. Still, I will try to share with you some of my observations, hopefully making you more aware.

  • Decentralized Social Networking Protocol (DSNP) specification

    The free communication of users on the Internet faces a variety of problems in the modern day. These challenges include censorship from state and corporate actors, the amplification of misinformation through viral content, and an ever-shrinking collection of near monopolies with absolute power over social interaction in the twenty-first century. Through the DSNP, we hope to mitigate and ideally solve these challenges in the way social interaction operates online.

  • YOU MIGHT NOT NEED JAVASCRIPT

    JavaScript is great, and by all means use it, while also being aware that you can build so many functional UI components without the additional dependency.

    Maybe you can include a few lines of utility code, or a mixin, and forgo the requirement. If you're only targeting more modern browsers, you might not need anything more than what the browser ships with.

  • Apple Security Research

    Our groundbreaking security technologies protect the users of over 1.8 billion active devices around the world. Hear about the latest advances in Apple security from our engineering teams, send us your own research, and work directly with us to be recognized and rewarded for helping keep our users safe.

  • Xcode 14 Single Size App Icon

    Starting with Xcode 14, when you create a new iOS project, the app icon in the asset catalog defaults to the new “Single Size”. Instead of the full set of icon sizes there’s a single slot for a 1024×1024 point image that the system resizes as needed.

  • Swift Compiler Driver

    The swift-driver project is a new implementation of the Swift compiler driver that is intended to replace the existing driver with a more extensible, maintainable, and robust code base. The specific goals of this project include:

    • A maintainable, robust, and flexible Swift code base
    • Library-based architecture that allows better integration with build tools
    • Leverage existing Swift build technologies (SwiftPM, llbuild)
    • A platform for experimenting with more efficient build models for Swift, including compile servers and unifying build graphs across different driver invocations
  • The Swift Driver, Compilation Model, and Command-Line Experience

    The Swift compiler's command-line interface resembles that of other compilers, particularly GCC and Clang. However, Swift's compilation model and some of its language features make it a bit tricky to plug into a larger build system. In particular, there's no correct way to specify a "one command per file" build rule for a normal Swift module.

  • Swift Driver Design & Internals

    This document serves to describe the high-level design of the Swift 2.0 compiler driver (which includes what the driver is intended to do, and the approach it takes to do that), as well as the internals of the driver (which is meant to provide a brief overview of and rationale for how the high-level design is implemented).

    The Swift driver is not intended to be GCC/Clang compatible, as it does not need to serve as a drop-in replacement for either driver. However, the design of the driver is inspired by Clang's design

  • Swift Driver Parseable Driver Output

    This document serves to describe the parseable output format provided by the Swift compiler driver with the "-parseable-output" flag. This output format is intended to be parsed by other programs; one such use case is to allow an IDE to construct a detailed log based on the commands the driver issued.

  • iOS Ref

    iOS Ref was created in January 2018 by me to serve as a one-stop quick reference spot for iOS developers.

  • Where View.task gets its main-actor isolation from

    SwiftUI’s .task modifier inherits its actor context from the surrounding function. If you call .task inside a view’s body property, the async operation will run on the main actor because View.body is (semi-secretly) annotated with @MainActor. However, if you call .task from a helper property or function that isn’t @MainActor-annotated, the async operation will run in the cooperative thread pool.

  • Developer guide on the iOS file system

    Learn how to work with files and directories when developing iOS applications.

    In this developer guide, we'll look at the organisation of APFS and the rules that apply to our code when we develop iOS applications.

  • Codeface

    See the architecture of any codebase!

    Codeface visualises the internal composition, dependencies and quality metrics of code to help you understand, improve and monitor it.

  • Check if two values of type Any are equal

    In Swift 5.7 that comes with Xcode 14 we can more easily check if two values of type Any are equal, because we can cast values to any Equatable and also use any Equatable as a parameter type thanks to Unlock existentials for all protocols change.
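
    A sketch of the pattern (Swift 5.7+; the helper names below are ours):

```swift
// Comparing through an extension lets us open the `any Equatable`
// existential and check that both sides share the same dynamic type.
extension Equatable {
    func isEqual(to other: Any) -> Bool {
        guard let other = other as? Self else { return false }
        return self == other
    }
}

func anyEqual(_ lhs: Any, _ rhs: Any) -> Bool {
    guard let lhs = lhs as? any Equatable else { return false }
    return lhs.isEqual(to: rhs)
}

assert(anyEqual(1, 1))
assert(!anyEqual(1, 2))
assert(!anyEqual(1, "1"))   // different dynamic types are never equal
assert(anyEqual("a", "a"))
```
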

  • @StateObject vs. @ObservedObject: The differences explained

    @StateObject and @ObservedObject have similar characteristics but differ in how SwiftUI manages their lifecycle. Use the state object property wrapper to ensure consistent results when the current view creates the observed object. Whenever you inject an observed object as a dependency, you can use @ObservedObject.

    Objects marked with the @StateObject property wrapper don’t get destroyed and re-instantiated when their containing view struct redraws. Understanding this difference is essential in cases where another view contains your view.
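
    The ownership split above can be sketched as follows (the model and view names are illustrative):

```swift
import SwiftUI

final class CounterModel: ObservableObject {
    @Published var count = 0
}

// The view that creates the model owns it with @StateObject, so the model
// survives redraws of this view struct...
struct CounterView: View {
    @StateObject private var model = CounterModel()

    var body: some View {
        CounterLabel(model: model) // ...and passes it down as a dependency.
    }
}

// A child view that merely receives the model observes it with @ObservedObject.
struct CounterLabel: View {
    @ObservedObject var model: CounterModel

    var body: some View {
        Text("\(model.count)")
    }
}
```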

  • Swift was always going to be part of the OS

    Recently on the Swift Forums, someone complained that putting Swift in the OS has only made things worse for developers. My immediate reaction is a snarky “welcome to the world of libraries shipped with the OS”, but that’s not helpful and also doesn’t refute their point. So here’s a blog post that talks about how we got where we did, covering the time when I worked on Swift at Apple. But I’m going to have to start a lot earlier to explain the problem…

  • Dynamic Linking Is Bad For Apps And Static Linking Is Also Bad For Apps

    A recent question on the Swift forums prompted me to actually write this blog post I’ve been idly thinking about for a long time. These days, it’s common for apps to have external dependencies, but both statically linking and dynamically linking those dependencies comes with drawbacks. (This is the same thing as the title, only less provocative.) Why is there this tension and what can be done about it?

  • [unsafeFlags(_:)](https://developer.apple.com/documentation/packagedescription/swiftsetting/unsafeflags(_:))

    Set unsafe flags to pass arbitrary command-line flags to the corresponding build tool.

    e.g. swiftSettings: [.unsafeFlags(["-Xfrontend", "-enable-bare-slash-regex"])]

  • Mastering NavigationStack in SwiftUI. NavigationPath.

    Today we learned how to use the NavigationPath type to push different views programmatically without defining additional types. We also learned how to serialize and store the current state of navigation in the scene storage to provide a better user experience.

  • withThrowingTaskGroup(of:returning:body:)

    Starts a new scope that can contain a dynamic number of throwing child tasks.
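
    A minimal usage sketch, assuming Swift 5.5+ concurrency; the function name is illustrative:

```swift
// Spawn one throwing child task per number, then sum the squares
// as the child tasks complete.
func sumOfSquares(_ numbers: [Int]) async throws -> Int {
    try await withThrowingTaskGroup(of: Int.self, returning: Int.self) { group in
        for n in numbers {
            group.addTask { n * n }
        }
        var total = 0
        for try await square in group {
            total += square
        }
        return total
    }
}
```

    If any child task throws, iterating the group rethrows the error and the remaining child tasks are cancelled.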

  • Rive

    The new standard for interactive graphics

    Blazing fast. Tiny size. Runs everywhere.

September

  • Universals to the right, Existentials to the left: the adjoint triple "Exists ⊣ Const ⊣ Forall"

    Exists @k ⊣ Const @k ⊣ Forall @k

  • How To Deploy a Kotlin API With http4k and Heroku

    This guide describes how to generate a Kotlin API using the http4k Project Wizard, and goes over what configurations and steps you'll need in order to deploy it (and other Kotlin APIs) to Heroku.

  • TCA Action Boundaries

    As I described in the exhaustivity testing article, a larger scale usually means discovering issues you might not have experienced with smaller apps. I'll cover more of them and my suggested solutions shortly, but today, I want to talk about Actions, their lack of boundaries, and what it entails.

  • DynamicIsland

    The layout and configuration for a Live Activity that appears in the Dynamic Island.

  • SwiftUI Navigation & URL Routing — Brandon Williams

    After a brief overview of how SwiftUI's new NavigationStack API works, we'll explore how to build a router that can transform nebulous URLs into state that drives deep-linking in your application. Then, almost magically, that same code will be used to power a server-side application for generating deep-linking URLs.

  • How to Use an Infrared Sensor With the Raspberry Pi Pico

    How can one use an infrared sensor with the Raspberry Pi Pico? With Raspberry Pi rolling out the all new Raspberry Pi Pico now, this is a rather common query for makers.

    An infrared sensor is a sensor that can measure the infrared light / electromagnetic radiation emitted by an object, thereby detecting its presence. In this blog, we shall take a look at writing a program to use an infrared sensor with the Raspberry Pi Pico.

  • Nate's adjoint 5-tuple

    In August, Nate Soares visited Topos Institute. We told him a little about Poly, and he told us about what he wanted from a type theory.

  • Mastering Dynamic Island in SwiftUI

    In this post, we will discuss possible configurations and customization points of the dynamic island feature using the new API available in the WidgetKit framework.

  • Live Activities (HUD)

    A Live Activity displays up-to-date information from your app, allowing people to view the progress of events or tasks at a glance.

    Live Activities help people keep track of tasks and events that they care about, offering persistent locations for displaying information that updates frequently. For example, a food delivery app could display the time remaining until a food order arrives, or a sports app can display the score for an ongoing game.

  • Polynomial functors and lenses

    The category of polynomial functors is the free coproduct completion of Sets^op. Equivalently, it is the total space of the family fibration of Sets^op. More concretely, an object of Poly is given by a set I and a family of sets A : I → Sets.

  • The iOS Engineer’s Guide to Beginning Kotlin Multiplatform Development
    • One of the most essential skills for Kotlin Multiplatform Mobile cross-platform development is sensitivity to what code is platform-dependent or not.
    • Platform-dependent code can be written entirely in Kotlin using KMM’s expect and actual syntax or by defining an interface in the KMM common module and implementing it natively in Android (using Kotlin) and iOS (using Swift).
    • Platform-independent code is written inside the KMM shared framework and can be used for any business logic for your application that does not directly depend upon any platform-specific code.
    • Given the complexities of writing multi-platform code, this post provides an overview, and future posts will dive deeper into these topics.
  • Displaying live activities in iOS 16

    One of the most prominent features of iOS 16 is live activity widgets. iOS 16 allows us to display the live state of ongoing activities from our apps on the lock screen or in the Dynamic Island of the new iPhone 14 Pro. This week we will learn how to build live activity widgets for our apps using the new ActivityKit framework.

  • Simplify Your React Component’s State With a State Machine

    Use a reducer to implement a fully-typed state machine without breaking a sweat.

  • Composable Architecture @ Scale

    Last week I spoke at NSSpainX to talk about how to use Composable Architecture in larger projects, the kind of issues you might run into and how you can work around them.

  • Roughly: tags, IDs (thrice), limits, pagination.

    After using AWS for ~14 years, I've internalised a handful of design patterns that I try to apply to my own software. I'm keen to know if it's the same for other folks.

  • SwiftUI's diffing algorithm
    • Unary views: Views with a single displayable, such as shapes, colors, controls and labels.
    • Structural views: Views that take zero or more other views, and combines them into a view with some subset of their displayables. Examples: ForEach, EmptyView, and the views used by ViewBuilder, such as TupleView and _ConditionalView.
    • Container views: Views that take the displayables of another view and manage them by deciding whether they should be displayed and how they should be laid out. Examples: HStack, VStack, List, LazyVStack.
    • Modifiers: Views that take one other view, and change the layout or look of all of its displayables individually. Examples: the views that modifiers such as .border, .padding, .frame generate, which are of type ModifiedContent.
  • What's the "any" keyword? Understanding Type Erasure in Swift

    The concept of Type Erasure is not new to Swift, but was radically improved in Swift 5.7 with the addition of the any prefix keyword (not to be confused with the capitalized Any type!) and improvements to the already existing some Opaque Type keyword. In this article, we'll explain the concept of type erasure, how it used to be done, what's different in Swift 5.7, and how these changes work under the hood.

  • A functional (programming) approach to error handling in Typescript

    Typescript and Javascript provide an error handling strategy based on the try/catch syntax which allows the programmer to escape the normal flow of the program in the presence of errors. This way of doing error handling certainly does its job but there are drawbacks that are often just accepted without giving too much thought about it. In this post, I will detail what these drawbacks are and how some ideas from functional programming can help to overcome them.

  • Using generics in Arrow functions in TypeScript
> ```typescript
> const returnInArray = <T>(value: T): T[] => [value];
> ```
  • Domain Driven Design using GADTs

    We used this approach in aws-lambda-haskell-runtime. Since Lambda results and errors must have a differently formatted body depending on the proxy (API Gateway, ALB, etc.), we used GADTs to make illegal states unrepresentable.

  • How 5 iOS apps could improve their startup time by an average of 28%

    Milliseconds matter

    Startup time is a crucial app metric that should be continuously monitored and improved. A/B tests at top mobile app companies consistently show that adding just fractions of a second can significantly hurt core usage metrics, such as daily active users and time spent on the app per user per day.

    Lyft reported a 5% increase in user sessions thanks to a 21% decrease in startup time for their driver app. Apple has made startup time the subject of numerous WWDC presentations.

  • Cancel or change the payment method for your AppleCare plan

    Make changes to your AppleCare+ plan or AppleCare Protection Plan.

    If you paid in full upfront for your AppleCare plan

  • The SwiftUI Layout Protocol – Part 1

    One of the best SwiftUI additions this year has to be the Layout protocol. Not only do we finally get our hands on the layout process, but it is also a great opportunity to better understand how layout works in SwiftUI.

    Creating a basic layout is not hard; we just need to implement two methods. Nevertheless, there are a lot of options we can play with to achieve more complex containers. We will explore beyond the typical Layout examples. There are some interesting topics I haven’t seen explained anywhere yet, so I will present them here. However, before we can dive into these areas, we need to begin by building a strong foundation.

  • The SwiftUI Layout Protocol – Part 2

    In the first part of this post we explored the basics of the Layout protocol in order to build a strong foundation of how Layout works. Now it’s time to dive into the less-discussed features and how to use them to our benefit.

  • About firmware updates for AirPods

    Learn about changes and features included in the firmware updates for your AirPods.

  • TIL: You Can Access A User’s Camera with Just HTML

    You can put the capture attribute on inputs with the type of file, and you can give it a value of "user" or "environment".

    The interesting thing about the capture attribute is for users coming to your website on a mobile device. If they interact with that input, instead of opening up the default file picker, it will actually open up one of their cameras. It could be the front-facing camera or the back-facing camera, depending on the value.

    If you set the value to "user", it will use the user-facing or front-facing camera and/or microphone. And if you set it to "environment", it will use the outward-facing or back-facing camera and/or microphone.
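
    The markup described above can be sketched as follows (behavior varies by mobile browser; desktop browsers typically fall back to the file picker):

```html
<!-- Opens the front-facing ("user") camera on most mobile browsers -->
<input type="file" accept="image/*" capture="user">

<!-- Opens the rear-facing ("environment") camera -->
<input type="file" accept="image/*" capture="environment">
```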

  • Exploring SwiftUI Redraw Behavior with Instruments
    1. Be careful using @ObservedObject in all your views; use it only when it is needed.
    2. Just because your code works doesn't mean it is optimal.
    3. While working with SwiftUI, check with Instruments which views are redrawing, and whether all your redraws are intended.
  • Improving Composable Architecture performance

    We are always looking for ways to improve the performance of our Composable Architecture, and spurred by some fascinating recent discussions, we spent most of last week looking for performance wins in the library. This has all culminated in a new release, 0.40.0, which brings a number of improvements to the library, and best of all, most of the changes came from collaboration with people in the community! 🤗

  • How Much Does An Average App Development Cost In 2022?

    So, the question arises, how much does it cost to develop an app for my business?

    What should my budget be? It seems like it fluctuates all the time. To put things in perspective, a recent study by Clutch of 12 top app developers found that the cost to create a mobile app ranged from $30,000 to $700,000.

    Let us understand more about app development costs. It helps to ask the right questions to fix your budget before you hire a developer and start building your app!

  • Steve Jobs Archive

    I grow little of the food I eat, and of the little I do grow I did not breed or perfect the seeds.

    I do not make any of my own clothing.

    I speak a language I did not invent or refine.

    I did not discover the mathematics I use.

    I am protected by freedoms and laws I did not conceive of or legislate, and do not enforce or adjudicate.

    I am moved by music I did not create myself.

    When I needed medical attention, I was helpless to help myself survive.

    I did not invent the transistor, the microprocessor, object oriented programming, or most of the technology I work with.

    I love and admire my species, living and dead, and am totally dependent on them for my life and well being.

    Sent from my iPad

  • safeAreaAspectFitLayoutGuide

    A layout guide for placing content of a particular aspect ratio.

    This layout guide provides a centered region in the window where you can place media content of a particular aspect ratio (width over height) to avoid obscuring the content.

  • JSON Crack

    Seamlessly visualize your JSON data instantly into graphs.

  • Three UIKit Protips

    There are three patterns I use in most of my UIKit projects that I've never seen anyone else talk about. I think they help readability a lot, so I'm sharing them here:

    1. An addSubviews method to define your view hierarchy all at once
    2. An @AssignedOnce property wrapper
    3. A pattern for keeping view creation at the bottom of a file to keep the top clean
  • Using CoordinateSpace to draw over a SwiftUI List

    In UIKit, we would use UICoordinateSpace.convert(_:to:) or the older UIView.convert(_:to:) functions, and happily there's a SwiftUI equivalent in CoordinateSpace.

  • Create Live Activities With ActivityKit on iOS 16

    We will use SwiftUI and WidgetKit to create the user interface of the Live Activity. Live Activities work like widget extensions and enable code sharing between your widgets and Live Activities.

  • Sharing cross-platform code in SwiftUI apps

    The biggest issue when working on a cross-platform SwiftUI app is when you need to drop into AppKit on macOS and UIKit on iOS. Often, the APIs that you need (because they are absent from SwiftUI) are simply entirely different. However, sometimes the APIs are nearly identical but just different enough to require branching into platform-specific code paths. A good example of this is UIPasteboard on iOS and NSPasteboard on macOS.

  • Xcode's refactoring options for async/await

    Automatically adopt async functions in your codebase with ease

  • Sourcery Swift Package command plugin

    In this article I will be covering what a Sourcery command plugin looks like, but I am already working on a part two where I will be creating a build tool plugin, which presents numerous interesting challenges.

August

  • GHCi List of commands

    Here is an exhaustive, annotated list of GHCi commands, somewhat divided by task. Within each section, commands are listed alphabetically.

    Some important ones are covered in more detail in their own lessons. Each command is linked to the page in this course that discusses it; you can either click through to find out about a particular command of interest, or keep reading through this series to get to it.

  • @ViewBuilder usage explained with code examples

    The @ViewBuilder attribute is one of the few result builders available for you to use in SwiftUI. You typically use it to create child views for a specific SwiftUI view in a readable way without having to use any return keywords.
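
    A minimal sketch of the pattern; the container name is illustrative:

```swift
import SwiftUI

// A custom container whose trailing closure is a @ViewBuilder,
// so callers can list child views without any `return` keyword.
struct Card<Content: View>: View {
    @ViewBuilder var content: Content

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            content
        }
        .padding()
    }
}

// Usage:
// Card {
//     Text("Title").font(.headline)
//     Text("Subtitle")
// }
```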

  • Adjust the direction of focus-based navigation in SwiftUI

    When the user navigates through focusable views in our app with the tab key, the focus will move in the reading order: first from the leading edge to the trailing edge and then from top down. While this default behavior is right for many use cases, sometimes we need to customize and redirect the focus movement to fit our custom app design.

  • Responsive layout in SwiftUI with ViewThatFits

    Making SwiftUI views responsive usually involves a lot of GeometryReaders and if-else.

    In iOS 16, SwiftUI got a new view that makes it easier to create a responsive layout, ViewThatFits.

    1. ViewThatFits applies fixedSize() to each child view, starting from the top.
    2. If a child view's ideal size is larger than the parent's proposed size, ViewThatFits evaluates the next child.
    3. It returns the first child that fits within the proposed size.
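
    The steps above can be sketched as follows (iOS 16+; the view is illustrative):

```swift
import SwiftUI

// The HStack is chosen when the available width fits its ideal size;
// otherwise ViewThatFits falls back to the more compact VStack.
struct RemainingTimeLabel: View {
    var body: some View {
        ViewThatFits(in: .horizontal) {
            HStack {
                Image(systemName: "clock")
                Text("25 minutes remaining")
            }
            VStack {
                Image(systemName: "clock")
                Text("25 min")
            }
        }
    }
}
```
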
  • AppKit is Done

    Well, not like Carbon. Don’t be so dramatic!

    More like Core Foundation. It’s still there behind the scenes, but programmers use high-level Objective-C and Swift wrappers from Foundation. If something is missing, you can call an underlying C API. The relation between SwiftUI and AppKit is similar, for now.

  • Migrating to protocol reducers (TCA)

    Learn how to migrate existing applications to use the new ReducerProtocol, in both Swift 5.7 and Swift 5.6.

    Migrating an application that uses the Reducer type over to the new ReducerProtocol can be done slowly and incrementally. The library provides the tools to convert one reducer at a time, allowing you to plug protocol-style reducers into old-style reducers, and vice-versa.

    Although we recommend migrating your code when you have time, the newest version of the library is still 100% backwards compatible with all previous versions. The Reducer type is now "soft" deprecated, which means we consider it deprecated but you will not get any warnings about it. Some time in the future we will officially deprecate it, and then sometime even later we will remove it so that we can rename the protocol to Reducer.

  • The Composable Architecture Performance

    Learn how to improve the performance of features built in the Composable Architecture.

    As your features and application grow you may run into performance problems, such as reducers becoming slow to execute, SwiftUI view bodies executing more often than expected, and more.

    • View stores
    • CPU-intensive calculations
    • High-frequency actions
  • Stop Xcode 14 beta from draining your battery

    There's a bug in Xcode 14 betas 4-6 that causes a crash loop in the PosterBoard process when you run an iOS 16 iPhone simulator, making your computer's CPU usage go sky high and battery to drain very quickly. Here's a workaround until Apple resolves the issue.

  • How to Make Custom Test Assertions in Swift (2016)

    Here are the steps for creating specialized test assertions in Swift:

    • Define your assertion as a helper function.
    • Design the parameters to be unambiguous.
    • Include optional parameters for file and line.
    • Upon failure, call XCTFail, passing the file and line arguments.
    • Report all the information you need to diagnose failures.
    • Can you make the assertion generic?
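
    A sketch following the steps above, made generic over any Equatable payload; the name and message format are illustrative:

```swift
import XCTest

// A specialized assertion: unwraps an optional and compares it to an
// expected value, reporting failures at the caller's file and line.
func assertUnwrapped<T: Equatable>(_ optional: T?, equals expected: T,
                                   file: StaticString = #filePath, line: UInt = #line) {
    guard let value = optional else {
        XCTFail("Expected \(expected), but value was nil", file: file, line: line)
        return
    }
    if value != expected {
        XCTFail("Expected \(expected), got \(value)", file: file, line: line)
    }
}
```
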
  • How to bridge async/await functions to Combine's Future type in Swift

    Learn how to call async/await code within Combine based APIs.

  • withLock(_:)

    func withLock<R>(_ body: () throws -> R) rethrows -> R
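
    A sketch of the semantics behind that signature (Foundation ships a real NSLock.withLock on newer OS versions; this illustrative variant shows what it does):

```swift
import Foundation

extension NSLock {
    // Acquire the lock, run the body, and release the lock
    // even if the body throws.
    func withLockSketch<R>(_ body: () throws -> R) rethrows -> R {
        lock()
        defer { unlock() }
        return try body()
    }
}
```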

  • Keeping a widget up to date efficiently on iOS

    • Make use of timelines
    • Find ways to refresh when appropriate
    • Make use of caching
  • The Best and Fastest Ways to Install Xcode on your Mac

    In this article, we will look at all of the alternative ways to install Xcode, how to speed up the process, and how to resolve disk space problems. We’ll also look at the Windows alternative to Xcode.

  • Structural identity in SwiftUI (2021)

    Structural identity is the type of identity that SwiftUI uses to understand your views without an explicit identifier by using your layout description. This week we will learn how to improve performance and eliminate unwanted animations by using inert view modifiers in SwiftUI.

  • You have to change mindset to use SwiftUI (2019)

    Last week I saw that the community tries to move UIKit development patterns to SwiftUI. But I’m sure that the best way to write efficient SwiftUI is to forget everything about UIKit and entirely change your mindset in terms of User Interface development. This week we will learn the main differences between UIKit and SwiftUI development.

  • Installing Swift

    The supported platforms for running Swift on the server and the ready-built tools packages are all hosted here on swift.org together with installation instructions. There’s also the language reference documentation section for viewing more information about Swift.

  • Build System

    The recommended way to build server applications is with Swift Package Manager. SwiftPM provides a cross-platform foundation for building Swift code and works nicely for having one code base that can be edited as well as run on many Swift platforms.

  • Testing

    SwiftPM is integrated with XCTest, Apple’s unit test framework. Running swift test from the terminal, or triggering the test action in your IDE (Xcode or similar), will run all of your XCTest test cases. Test results will be displayed in your IDE or printed out to the terminal.

  • Debugging Performance Issues

    First of all, it’s very important to make sure that you compiled your Swift code in release mode. The performance difference between debug and release builds is huge in Swift. You can compile your Swift code in release mode using swift build -c release.

  • Deploying to Servers or Public Cloud

    The following guides can help with the deployment to public cloud providers:

    • AWS on EC2
    • AWS on Fargate with Vapor and MongoDB Atlas
    • DigitalOcean
    • Heroku
    • Kubernetes & Docker
    • GCP
    • Have a guide for other popular public clouds like Azure? Add it here!
  • Packaging Applications for Deployment

    Once an application is built for production, it still needs to be packaged before it can be deployed to servers. There are several strategies for packaging Swift applications for deployment.

  • LLVM TSAN / ASAN

    For multithreaded and low-level unsafe interfacing server code, the ability to use LLVM’s ThreadSanitizer and AddressSanitizer can help troubleshoot invalid thread usage and invalid usage/access of memory.

  • Structs, Classes, and Actors in iOS Interviews

    We saw what reference and value types are, and what the new actor types are. We also described some reasons to use classes over structs, and what dynamic and static method dispatch are in Swift. We discussed thread safety using types in Swift and how you can expand your studies about them.

  • Conditional layouts in SwiftUI

    From the first day of the SwiftUI framework, we have primary layout containers like VStack, HStack, and ZStack. The current iteration of the SwiftUI framework brings another layout container allowing us to place views in a grid. But the most important addition was the Layout protocol that all layout containers conform to. It also allows us to build our super-custom layout containers from scratch. This week we will learn the basics of the Layout protocol in SwiftUI and how to build conditional layouts using AnyLayout type.

  • The LDT, a Perfect Home for All Your Kernel Payloads

    The concepts presented here highlight several powerful generalized techniques for macOS kernel exploits on Intel-based systems. We demonstrated how the dblmap can substantially weaken the efficacy of KASLR, provide several interesting kernel call targets, host smuggled kernel shellcode, and more.

    These primitives were used in the practical exploitation of numerous kernel vulnerabilities we responsibly disclosed to Apple over the past year. Abusing core low-level constructs of the operating system can lead to very interesting consequences, and prove incredibly challenging to mitigate.

  • Suspicious Package

    An Application for Inspecting macOS Installer Packages

  • withUnsafeTemporaryAllocation(of:capacity:_:)

    Provides scoped access to a buffer pointer to memory of the specified type and with the specified capacity.

  • swiftinit

    Swiftinit is a collection of richly-linked high-level technical articles and tutorials related to the Swift programming language. Kelvin Ma started Swiftinit in late 2021 when he and a few professional Swift developers realized that educational resources for the Swift language were often scattered across personal blogs or buried deep in GitHub repositories, making it hard for beginners to get started with the language.

  • View Controller Presentation Changes in iOS and iPadOS 16

    In iOS/iPadOS 16.0 there have been a few minor changes and one significant change in behaviour when presenting modal view controllers:

    • when the presenting view controller is in a regular-width environment on iPad, form sheets are slightly bigger than on previous iPadOS versions. This changed in beta 4. (If the presenting view has compact width, a form sheet presentation will adapt and fill the width, just like on iPhone.)
    • the height of the navigation bar in a non-full-screen, non-popover, modally-presented view controller is smaller than before (12 points smaller on iPhone and 6 points smaller on iPad). This change has only just occurred in beta 5. Many thanks to Jordan Hipwell for discovering this and bringing it to my attention. He also discovered this has not (yet?) changed in SwiftUI.
    • non-full-screen modally-presented double and triple column style split view controllers have a different appearance compared to iPadOS 13 to 15.
  • Achieving A Completely Open Source Implementation of Apple Code Signing and Notarization

    I'm very excited to announce that we now have a pure Rust implementation of a client for Apple's Notary API in the apple-codesign crate. This means we can now notarize Apple software from any machine where you can get the Rust crate to compile. This means we no longer have a dependency on the 3rd party Apple Transporter application. Notarization, like code signing, is 100% open source Rust code.

  • An Introduction to Plutus Core

    Plutus Core (PLC) is the programming language that “runs” on the Cardano Blockchain. A blockchain is just a distributed data structure though, so programs do not literally run on it. What we mean is that Plutus Core programs are stored on the blockchain in binary form and can be referenced by transactions. Plutus Core programs are later retrieved from the blockchain and executed by Cardano Nodes when required by other transactions that reference them.

    In this blog post, we give a high-level overview of the role that Plutus Core plays in the Cardano ecosystem, what programs written in Plutus Core look like, and how those programs work.

  • Stabilize, Modularize, Modernize: Scaling Slack’s Mobile Codebases

    The Stabilization phase of Project Duplo lasted six months. In this phase, we wanted to “stop the bleeding”, by addressing key elements of tech debt that were slowing development on each platform. We talked to our mobile developers about the issues they thought were the most important to address, used code health metrics to assess which files in the codebase were the “worst”, and tried to focus on a few key areas where we could make big impacts. For this phase, we had a core team of developers who were dedicated to working on Duplo, as well as leads for each platform. This core team worked together throughout the Stabilization phase, to ensure we had developers focused on the project (and not pulled off onto feature development).

  • Scaling Slack’s Mobile Codebases: Modularization

    We use the word module to describe a subproject — generally a static or dynamic framework linked into the app. Prior to Duplo, we had split off some of our infrastructure code into subprojects on both platforms, but most of the code was still in the main app target, and all feature development was happening there. During Duplo, modularization was a key focus of the project, and we made a concerted push to move code out of the app target.

  • Scaling Slack’s Mobile Codebases: Modernization

    In addition to modularizing our codebase as part of Duplo, we also wanted to improve our overall app architecture, ensure we were keeping up with industry trends, and adopt more forward-looking design patterns and technologies. On each platform, we decided on particular areas of focus which we thought would both improve the experience of building features for our mobile developers and put our mobile codebases on better footing.

  • Experimenting with Live Activities

    "These are my notes on playing with the API and implementing my first Live Activity."

  • How do 3D transforms of iOS views work under the hood?

    When it comes to transforming a view, one can think of it as applying a calculation to each individual point of the view’s layer, such that for every (x, y, z), we obtain a new (x', y', z'). That calculation is actually a multiplication of the coordinates (x, y, z) by a matrix (good ol' linear algebra). How we construct our matrix is through the use of various types of CATransform3Ds, which we’ll now dive into.
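
    The matrix multiplication described above can be sketched in plain Swift (a generic homogeneous-coordinate calculation; CATransform3D's own storage layout and row/column convention differ in detail):

```swift
// A 4x4 matrix applied to a point (x, y, z) in homogeneous coordinates
// (x, y, z, 1), the same math a 3D transform performs per point.
typealias Matrix4 = [[Double]]

func apply(_ m: Matrix4, to p: (x: Double, y: Double, z: Double)) -> (x: Double, y: Double, z: Double) {
    let v = [p.x, p.y, p.z, 1.0]
    // Each output coordinate is the dot product of a matrix row with the vector.
    let out = (0..<4).map { row in
        zip(m[row], v).map(*).reduce(0, +)
    }
    // Divide by w to project back from homogeneous coordinates.
    return (out[0] / out[3], out[1] / out[3], out[2] / out[3])
}

// A translation by (10, 20, 0): the identity with offsets in the last column.
let translate: Matrix4 = [
    [1, 0, 0, 10],
    [0, 1, 0, 20],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
```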

  • TIL: lldb po strongly captures an object, forever

    While investigating some memory leak issue, I found out that if I po an object before using Memory Graph, that object would stay in memory forever, and Memory Graph would show something like NSKeyValueDependencyInfo as an owner of the object.

    A leak will also happen when using p or expression.

  • KeyPathComparator

    A comparator that uses another sort comparator to provide the comparison of values at a key path.

  • Sort elements based on a property value using KeyPathComparator

    If we have an array of elements in Swift and we need to sort it based on a specific property, we can't use the simple sorted() method.

    Another way to approach it would be to use KeyPathComparator introduced in Foundation in iOS 15. We can pass the comparator created with a key path to a particular property to sorted(using:) method.
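
    A minimal sketch, assuming Foundation on iOS 15+/macOS 12+; the Person type is illustrative:

```swift
import Foundation

struct Person {
    let name: String
    let age: Int
}

let people = [Person(name: "Ana", age: 40), Person(name: "Bo", age: 25)]

// Sort ascending by the `age` property via a key-path-based comparator.
let byAge = people.sorted(using: KeyPathComparator(\Person.age))
```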

  • Sendable and @Sendable closures explained with code examples

    Sendable and @Sendable are part of the concurrency changes that arrived in Swift 5.5 and address a challenging problem of type-checking values passed between structured concurrency constructs and actor messages.

July

  • Monad Confusion and the Blurry Line Between Data and Computation

    There's a common joke that the rite of passage for every Haskell programmer is to write a "monad tutorial" blog post once they think they finally understand how monads work. There are enough of those posts out there, though, so I don't intend for this to be yet another monad tutorial. However, based on my learning experience, I do have some thoughts on why people seem to struggle so much with monads, and as a result, why so many of those tutorials exist.

    At a high level, the intuition for monads is that they are an abstraction of sequencing in programming. Any computation that involves "do this, and then do that using the previous result" can be considered monadic.

  • Common Swift Task Continuation Problem

    One thing the Swift engineering community recognized while developing the new asynchronous APIs is that they needed to support a way to bridge old closure-based APIs into the new async/await world.

    And that is exactly what they did: the Swift team created the Continuation API. It creates a suspension point in your code, which is exactly what you need to adopt the new async/await semantics.
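
    A minimal sketch of the bridging pattern (the fetchAnswer API is made up): withCheckedContinuation suspends the async caller until the closure-based API calls back, and a checked continuation must be resumed exactly once.

```swift
// A legacy closure-based API (made up for illustration).
func fetchAnswer(completion: @escaping (Int) -> Void) {
    completion(42)
}

// The async bridge: withCheckedContinuation suspends until the
// callback fires, then resumes with the delivered value.
// A checked continuation traps if it is resumed more than once.
func fetchAnswer() async -> Int {
    await withCheckedContinuation { continuation in
        fetchAnswer { value in
            continuation.resume(returning: value)
        }
    }
}
```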

  • UI Design Difference between Android and iOS Apps

    Good design delivers an excellent user experience on both Android and iOS. The two platforms take distinctly different approaches to UI/UX, yet both share consistent features that guarantee users a better experience.

    Apple tries to keep complete control over its products, which guarantees that users have a consistent experience across all of Apple's devices. Apple takes more care over design, UX, and performance than other manufacturers. Google, on the other hand, maintains a platform that targets a significant share of the phones available.

    I'd like to highlight the UI differences between Android and iOS from various perspectives.

  • ActivityKit

    Share live updates from your app as Live Activities on the Lock Screen.

    With the ActivityKit framework, you can start a Live Activity to share live updates from your app on the Lock Screen. For example, a sports app might allow the user to start a Live Activity for a live sports game. The Live Activity appears on the Lock Screen for the duration of the game and offers the latest updates about the game at a glance.

    In your app, you use ActivityKit to configure, start, update, and end the Live Activity, and your app’s widget extension uses SwiftUI and WidgetKit to create the user interface of the Live Activity. This makes the presentation code of a Live Activity similar to the widget code and enables code sharing between your widgets and Live Activities. However, Live Activities use a different mechanism to receive updates compared to widgets. Instead of using a timeline mechanism, Live Activities receive updated data from your app with ActivityKit or by receiving remote push notifications with the User Notifications framework.

  • Displaying live data on the Lock Screen with Live Activities

    Start a Live Activity that appears on the Lock Screen and update it with your app’s most current data.

    Live Activities display and update an app’s most current data on the iPhone Lock Screen. This allows people to see live information they care about the most at a glance. To offer Live Activities, add code to your existing widget extension or create a new widget extension if your app doesn’t already include one. Live Activities use WidgetKit functionality and SwiftUI for their user interface on the Lock Screen. ActivityKit’s role is to handle the life cycle of each Live Activity: You use its API to request, update, and end a Live Activity.

  • Activity

    The object you use to start, update, and end a Live Activity.

  • TYPE-SIGNATURE

    It's basically "Who Wants to Be a Millionaire?" — but with types.

  • NavigationSplitView

    A view that presents views in two or three columns, where selections in leading columns control presentations in subsequent columns.

    You create a navigation split view with two or three columns, and typically use it as the root view in a Scene. People choose one or more items in a leading column to display details about those items in subsequent columns.

  • Format Styles In Excruciating Detail

    Swift's FormatStyle and ParseableFormatStyle are the easiest way to convert Foundation data types to and from localized strings. Unfortunately, Apple hasn't done a great job of documenting just what they can do or how to use them.

  • LabeledContent

    A container for attaching a label to a value-bearing view.

  • Mastering LabeledContent in SwiftUI

    LabeledContent view is a simple view that composes a label and content. Usually, it displays the label on the leading edge and the content on the trailing edge. You can achieve similar behavior by inserting the label and content into the HStack and placing the Spacer view between them.

  • From Strings to Data Using ParsableFormatStyle

    The venerable (NS)Formatter class (and Apple's various subclasses of it) is an Objective-C based API that is best known as the go-to method for converting data types into strings. One of the lesser-known features of these APIs is that the same formatters can do the reverse: parse strings into their respective data types.

    Apple’s modern Swift replacement system for Formatter is a set of protocols: FormatStyle and ParseableFormatStyle. The former handles the conversion to strings, and the latter strings to data.

    FormatStyle and its various implementations are their own beast. Apple's implementations supporting the built-in Foundation data types are quite extensive but spottily documented. I made a whole site to help you use them.

    But that’s not what we’re going to talk about today.

    Today we're going to talk about ParseableFormatStyle and its implementations. How can we convert strings into data?
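
    As a small taste, here is a round trip through Foundation's ISO 8601 style: formatted(.iso8601) turns a Date into a String, and the matching parse strategy turns the String back into a Date.

```swift
import Foundation

// Parse a fixed ISO 8601 string into a Date via its ParseStrategy,
// format it back into a String, then parse it again.
let date = try! Date("2022-07-01T12:00:00Z", strategy: .iso8601)
let text = date.formatted(.iso8601)
let roundTripped = try! Date(text, strategy: .iso8601)
```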

  • Supporting FormatStyle & ParseableFormatStyle To Your Custom Types

    A full example of adding String and AttributedString output to our custom types, as well as adding the ability to parse String values into your custom type.

  • Formatting Your Own Types

    So you've read the gosh darn site and know how to get strings from data types. Then you read the ParseableFormatStyle post and know how to parse strings into data. If your next thought was: "Now I want to do this with my own data types", then this is for you.

  • Switching between SwiftUI’s HStack and VStack

    SwiftUI’s various stacks are some of the framework’s most fundamental layout tools, and enable us to define groups of views that are aligned either horizontally, vertically, or stacked in terms of depth.

struct DynamicStack<Content: View>: View {
    @Environment(\.horizontalSizeClass) private var sizeClass
    var horizontalAlignment = HorizontalAlignment.center
    var verticalAlignment = VerticalAlignment.center
    var spacing: CGFloat?
    @ViewBuilder var content: () -> Content

    var body: some View {
        currentLayout(content)
    }
}

private extension DynamicStack {
    var currentLayout: AnyLayout {
        switch sizeClass {
        case .regular, .none:
            return horizontalLayout
        case .compact:
            return verticalLayout
        @unknown default:
            return verticalLayout
        }
    }

    var horizontalLayout: AnyLayout {
        AnyLayout(HStack(
            alignment: verticalAlignment,
            spacing: spacing
        ))
    }

    var verticalLayout: AnyLayout {
        AnyLayout(VStack(
            alignment: horizontalAlignment,
            spacing: spacing
        ))
    }
}
import SwiftUI

/// Edit: I completely refactored the code. The previous implementation is accessible through
/// revisions. Now:
/// ```
///  let path = NavigationPath()
///  let inspectable: NavigationPath.Inspectable = path.inspectable //or
///  let typedInspectable: NavigationPath.Inspectable.Of<Component>
///    = path.inspectable(of: Component.self)
/// ```
/// Both types are random-access and range replaceable collections. The first one of Any?, the
/// second of `Component`.
/// Both expose a `.navigationPath` property, so it is easy to construct a Binding like
/// ```
///  @State var path = NavigationPath().inspectable
///  NavigationStack(path: $path.navigationPath) { … }
/// ```
/// All of the following is enabled by our capacity to extract the last path component. It is maybe
/// possible to defer this function to SwiftUI by observing what's popping up in
/// `.navigationDestination`, but it is unlikely that we'll be able to make this work without
/// glitches/side-effects.
/// So for now, we are restricted to `NavigationPath` with `Codable` components only.
///
/// As an aside, I find it very interesting that once you have `var count: Int`, `var last: Element?`,
/// `append(Element)` and `removeLast()` operations on a set of elements, you can turn it into a
/// mutable random access collection.

// MARK: - Common Helpers -
extension NavigationPath { // RandomAccessCollection-like
  var _startIndex: Int { 0 }
  var _endIndex: Int { count }
  
  /// We opt for throwing functions instead of subscripts. This also makes room for a
  /// hypothetical `inout` cache argument.
  func get(at position: Int) throws -> Any {
    var copy = self
    copy.removeLast(count - (position + 1))
    return try copy.lastComponent!
  }
  
  mutating func set(_ newValue: Any, at position: Int) throws {
    // Auto-register the mangled type name
    registerValueForNavigationPathComponent(newValue)
    // We preserve the tail (position+1)...
    var tail = [Any]()
    while count > position + 1 {
      // Because `lastComponent == nil <=> isEmpty`, we can force-unwrap:
      tail.append(try lastComponent!)
      removeLast()
    }
    // Discard the one that will be replaced:
    if !isEmpty {
      removeLast()
    }
    // Double parentheses are required by the current version of Swift
    // See https://github.com/apple/swift/issues/59985
    append((newValue as! any (Hashable & Codable)))
    // Restore the tail that was preserved:
    for preserved in tail.reversed() {
      append((preserved as! any (Hashable & Codable)))
    }
  }
}

extension NavigationPath { // RangeReplaceableCollection+MutableCollection-like
  mutating func _replaceSubrange<C>(_ subrange: Range<Int>, with newElements: C) throws
  where C : Collection, Any == C.Element {
    // Auto-register the mangled type name
    if let first = newElements.first {
      registerValueForNavigationPathComponent(first)
    }
    // We apply the same trick as for the index setter.
    var tail = [Any]()
    while count > subrange.upperBound {
      tail.append(try lastComponent!)
      removeLast()
    }
    // We don't need to preserve this part which will be replaced:
    while count > subrange.lowerBound {
      removeLast()
    }
    // Insert the new elements:
    for newValue in newElements {
      append((newValue as! any (Hashable & Codable)))
    }
    // Restore the preserved tail:
    for preserved in tail.reversed() {
      append((preserved as! any (Hashable & Codable)))
    }
  }
}

extension NavigationPath {
  public struct Inspectable: RandomAccessCollection, RangeReplaceableCollection, MutableCollection {
    
    public var navigationPath: NavigationPath
    
    public init(_ navigationPath: NavigationPath) {
      self.navigationPath = navigationPath
    }
    
    public init() {
      self.navigationPath = .init()
    }
    
    public var startIndex: Int { navigationPath._startIndex }
    public var endIndex: Int { navigationPath._endIndex }
    
    public subscript(position: Int) -> Any {
      get {
        do {
          return try navigationPath.get(at: position)
        } catch {
          NavigationPath.printExtractionError(error)
        }
      }
      set {
        do {
          try navigationPath.set(newValue, at: position)
        } catch {
          NavigationPath.printExtractionError(error)
        }
      }
    }
    
    public mutating func replaceSubrange<C>(_ subrange: Range<Int>, with newElements: C)
    where C : Collection, Any == C.Element {
      do {
        try navigationPath._replaceSubrange(subrange, with: newElements)
      } catch {
        NavigationPath.printExtractionError(error)
      }
    }
    /// A throwing version of `last`
    public var lastComponent: Any? {
      get throws { try navigationPath.lastComponent }
    }
  }
}

extension NavigationPath {
  /// Generates an inspectable representation of the current path.
  public var inspectable: Inspectable { .init(self) }
}

extension NavigationPath.Inspectable {
  public struct Of<Component>: RandomAccessCollection, RangeReplaceableCollection, MutableCollection
  where Component: Hashable, Component: Codable {
    
    public var navigationPath: NavigationPath
    
    public init(_ navigationPath: NavigationPath) {
      registerTypeForNavigationPathComponent(Component.self)
      self.navigationPath = navigationPath
    }
    
    public init() {
      registerTypeForNavigationPathComponent(Component.self)
      self.navigationPath = .init()
    }
    
    public var startIndex: Int { navigationPath._startIndex }
    public var endIndex: Int { navigationPath._endIndex }
    
    public subscript(position: Int) -> Component {
      get {
        do {
          return try navigationPath.get(at: position) as! Component
        } catch {
          NavigationPath.printExtractionError(error)
        }
      }
      set {
        do {
          try navigationPath.set(newValue, at: position)
        } catch {
          NavigationPath.printExtractionError(error)
        }
      }
    }
    
    public mutating func replaceSubrange<C>(_ subrange: Range<Int>, with newElements: C)
    where C : Collection, Component == C.Element {
      do {
        try navigationPath._replaceSubrange(subrange, with: newElements.map{ $0 as Any })
      } catch {
        NavigationPath.printExtractionError(error)
      }
    }
    
    /// A throwing version of `last`
    public var lastComponent: Component? {
      get throws { try navigationPath.lastComponent as? Component }
    }
  }
}

extension NavigationPath {
  /// Generates a typed inspectable representation of the current path.
  public func inspectable<Component>(of type: Component.Type)
  -> NavigationPath.Inspectable.Of<Component> {
    .init(self)
  }
}
// MARK: - Utilities
extension NavigationPath {
  public enum Error: Swift.Error {
    case nonInspectablePath
    case unableToFindMangledName(String)
  }
  /// This is not super efficient, but at least always in sync.
  var lastComponent: Any? {
    get throws {
      guard !isEmpty else { return nil }
      guard let codable else {
        throw Error.nonInspectablePath
      }
      return try JSONDecoder()
        .decode(_LastElementDecoder.self, from: JSONEncoder().encode(codable)).value
    }
  }
  
  static func printExtractionError(_ error: Swift.Error) -> Never {
    fatalError("Failed to extract `NavigationPath` component: \(error)")
  }
  
  /// We use this type to decode the first two encoded components.
  private struct _LastElementDecoder: Decodable {
    var value: Any
    init(from decoder: Decoder) throws {
      var container = try decoder.unkeyedContainer()
      let typeName = try container.decode(String.self)
      typesRegisterLock.lock()
      let mangledTypeName = typeNameToMangled[typeName, default: typeName]
      typesRegisterLock.unlock()
      
      guard let type = _typeByName(mangledTypeName) as? (any Decodable.Type)
      else {
        typesRegisterLock.lock()
        defer { typesRegisterLock.unlock() }
        if typeNameToMangled[typeName] == nil {
          throw Error.unableToFindMangledName(typeName)
        }
        throw DecodingError.dataCorruptedError(
          in: container,
          debugDescription: "\(typeName) is not decodable."
        )
      }
      let encodedValue = try container.decode(String.self)
      self.value = try JSONDecoder().decode(type, from: Data(encodedValue.utf8))
    }
  }
}

/// `NavigationPath`'s codable representation uses `_typeName` instead of mangled names, likely
/// because it is intended to be serialized. But we need mangled names to respawn types using
/// `_typeByName`.
/// I don't know a way to find the mangled name from the type name. If one could generate a list
/// of mangled symbols, we could probably look it up. In the meantime, clients of `Inspectable`
/// should register the types they intend to use as path components. This step is performed
/// automatically for `NavigationPath.Inspectable.Of<Component>`, and also automatically when
/// editing the `NavigationPath` using the inspector, but it needs to be performed manually if
/// some `NavigationPath` is deserialized.
///
/// In other words, registering is only required when deserializing a heterogeneous
/// `NavigationPath` or a homogeneous one with untyped inspection.

/// Register a type for inspection
public func registerTypeForNavigationPathComponent<T>(_ type: T.Type) {
  typesRegisterLock.lock()
  typeNameToMangled[_typeName(T.self)] = _mangledTypeName(T.self)
  typesRegisterLock.unlock()
}
/// Register a type for inspection from any value of it
public func registerValueForNavigationPathComponent(_ value: Any) {
  let type = type(of: value)
  typesRegisterLock.lock()
  typeNameToMangled[_typeName(type)] = _mangledTypeName(type)
  typesRegisterLock.unlock()
}
private let typesRegisterLock = NSRecursiveLock()
private var typeNameToMangled = [String: String]()

// MARK: - Tests
func runPseudoTests() {
  do {
    // Check extracting the last component
    let path = NavigationPath([0,1,2,3,4,5,6,7,8,9])
    assert(path.inspectable.last as? Int == 9)
  }
  do {
    // Check extracting the nth component
    let path = NavigationPath([0,1,2,3,4,5,6,7,8,9])
    assert(path.inspectable[4] as? Int == 4)
  }
  do {
    // Check setting the nth component
    var path = NavigationPath([0,1,2,3,4,5,6,7,8,9]).inspectable
    path[4] = -1
    let expected = NavigationPath([0,1,2,3,-1,5,6,7,8,9])
    assert(path.navigationPath == expected)
  }
  
  do {
    // Check joining two paths
    let path = NavigationPath([0,1,2,3,4,5,6,7,8,9])
    let p1 = NavigationPath([0,1,2,3,4])
    let p2 = NavigationPath([5,6,7,8,9])
    let joinedPath = (p1.inspectable + p2.inspectable).navigationPath
    assert(path == joinedPath)
  }
  
  do {
    // Check editing a path "in the belly".
    var inspectable = NavigationPath([0,1,2,3,4,5,6,7,8,9]).inspectable
    inspectable.replaceSubrange(3..<6, with: [-1, -2])
    let expected = NavigationPath([0,1,2,-1,-2,6,7,8,9])
    assert(expected == inspectable.navigationPath)
  }
}

extension View {
  // Use this method in place of `navigationDestination` to automatically
  // register component types.
  func inspectableNavigationDestination<D: Hashable, Content: View>(for value: D.Type, destination: @escaping (D) -> Content) -> some View {
    registerTypeForNavigationPathComponent(D.self)
    return self.navigationDestination(for: value, destination: destination)
  }
}

// MARK: -
// Example: Navigation with two destination types and `NavigationPath`
// inspection and manipulation.

struct Destination: Hashable, Codable {
  var id: Int
  var title: String
}

struct AlternativeDestination: Hashable, Codable {
  var id: Int
  var title: String
}

struct ContentView: View {
  @State var path = NavigationPath().inspectable // A `NavigationPath.Inspectable` value
  @State var isModalPresented: Bool = false
  var body: some View {
    NavigationStack(path: $path.navigationPath) { // We can derive a "mapped" binding from @State
      VStack {
        Button {
          path.append(
            Destination(id: 2, title: "Screen #\(2)")
          )
        } label: {
          Label("Navigate to next", systemImage: "arrow.forward")
        }
        Button {
          let destinations = (2...5).map {
            Destination(id: $0, title: "Screen #\($0)")
          }
          path.append(contentsOf: destinations)
          
        } label: {
          Label("Navigate to \"5\"", systemImage: "arrow.forward")
        }
      }
      .navigationBarTitleDisplayMode(.inline)
      .navigationTitle("NavigationPath inspection")
      .inspectableNavigationDestination(for: Destination.self) {
        DestinationView(destination: $0, path: $path)
      }
      .inspectableNavigationDestination(for: AlternativeDestination.self) {
        AlternativeDestinationView(destination: $0, path: $path)
      }
    }
    .buttonStyle(.borderedProminent)
    .safeAreaInset(edge: .bottom) {
      lastComponentOverlay
    }
    .task {
      runPseudoTests()
    }
  }

  var lastComponentOverlay: some View {
    // We observe the current last element of the path, extracted from the inspectable path
    VStack(spacing: 8) {
      Text("Last element of path")
        .textCase(.uppercase)
        .foregroundStyle(.secondary)
      Text(path.last.map(String.init(describing:)) ?? "nil")
        .font(.footnote.monospaced()).fontWeight(.semibold)
        .frame(maxWidth: .infinity, alignment: .leading)
      if !path.isEmpty {
        Button {
          isModalPresented = true
        } label: {
          Text("Show NavigationPath")
        }
        .buttonStyle(.bordered)
      }
    }
    .font(.footnote)
    .frame(maxWidth: .infinity)
    .padding()
    .background(
      .ultraThinMaterial.shadow(.drop(radius: 6)),
      in: RoundedRectangle(cornerRadius: 11))
    .padding(.horizontal)
    .animation(.spring(dampingFraction: 0.7), value: (path.last as? Destination)?.id)
    .sheet(isPresented: $isModalPresented) {
      if path.isEmpty {
        VStack {
          Text("The path is empty")
          Button("Close") { isModalPresented = false }
        }
          .presentationDetents([.medium])
      } else {
        NavigationStack {
          List {
            ForEach(Array(zip(0..., path)), id: \.0) { index, value in
              HStack {
                Text("\(index)")
                Text(String(describing: value))
              }
            }
            .onDelete { offsets in
              path.remove(atOffsets: offsets)
            }
            // This is glitchy in SwiftUI Previews
            .onMove { source, destination in
              path.move(fromOffsets: source, toOffset: destination)
            }
          }
          .safeAreaInset(edge: .bottom) {
            if path.count > 1 {
              Button {
                // Not animating unfortunately, likely by design for deep-linking
                withAnimation {
                  path.shuffle()
                }
              } label: {
                Label("Shuffle", systemImage: "dice")
                  .frame(maxWidth: .infinity)
                  .frame(minHeight: 33)
              }
              .buttonStyle(.borderedProminent)
              .padding(.horizontal)
            }
          }
          .environment(\.editMode, .constant(.active))
          .navigationTitle("NavigationPath")
          .navigationBarTitleDisplayMode(.inline)
        }
        .presentationDetents([.medium, .large])
      }
    }
  }
}

struct DestinationView: View {
  var destination: Destination
  @Binding var path: NavigationPath.Inspectable
  var body: some View {
    let nextDestination = Destination(
      id: destination.id + 1,
      title: "Screen #\(destination.id + 1)"
    )
    
    let nextAlternativeDestination = AlternativeDestination(
      id: destination.id + 1,
      title: "Alternative Screen #\(destination.id + 1)"
    )
    
    List {
      NavigationLink("Navigate to \(destination.id + 1)",
                     value: nextDestination)
      NavigationLink("Alternative destination \(destination.id + 1)",
                     value: nextAlternativeDestination)
    }
    .safeAreaInset(edge: .top) {
      HStack {
        Button {
          path.append(nextDestination)
        } label: {
          Label("Navigate to \(destination.id + 1)", systemImage: "arrow.forward")
        }

        if path.count > 1 {
          Button {
            withAnimation {
              path.shuffle()
            }
          } label: {
            Label("Shuffle", systemImage: "dice")
          }
        }
      }
    }
    .navigationTitle(destination.title)
  }
}

struct AlternativeDestinationView: View {
  var destination: AlternativeDestination
  @Binding var path: NavigationPath.Inspectable
  var body: some View {
    let nextDestination = Destination(
      id: destination.id + 1,
      title: "Screen #\(destination.id + 1)"
    )
    
    let nextAlternativeDestination = AlternativeDestination(
      id: destination.id + 1,
      title: "Alternative Screen #\(destination.id + 1)"
    )
    
    List {
      NavigationLink("Navigate to \(destination.id + 1)",
                     value: nextDestination)
      NavigationLink("Alternative destination \(destination.id + 1)",
                     value: nextAlternativeDestination)
    }
    .scrollContentBackground(.hidden)
    .background(Color.yellow)
    .safeAreaInset(edge: .top) {
      HStack {
        Button {
          path.append(nextDestination)
        } label: {
          Label("Navigate to \(destination.id + 1)", systemImage: "arrow.forward")
        }

        if path.count > 1 {
          Button {
            withAnimation {
              path.shuffle()
            }
          } label: {
            Label("Shuffle", systemImage: "dice")
          }
        }
      }
    }
    .navigationTitle(destination.title)
  }
}

struct ContentView_Previews: PreviewProvider {
  static var previews: some View {
    ContentView()
  }
}
  • Useful macOS command line commands

    Variously useful CLI commands such as downloading and creating USB installers

  • GenerateFake.sourcerytemplate

    Sourcery Template for Generating fakes

  • What's New in Xcode 14 Previews

    Xcode 14 brings a new look to the preview canvas. The pin control is now in the upper left corner and works as before, allowing you to navigate to different source files while keeping the preview pinned in the canvas. Next to the pin control are the new page controls.

  • SwiftUI Renderers and Their Tricks

    Unlike most types in SwiftUI, ImageRenderer is not a struct; it is a class. And not just any class: it is an ObservableObject. That means it has a publisher you can subscribe to. Every event published by the renderer means that the image has changed.

  • NavigationPath

    A type-erased list of data representing the content of a navigation stack. You can manage the state of a NavigationStack by initializing the stack with a binding to a collection of data. The stack stores data items in the collection for each view on the stack. You also can read and write the collection to observe and alter the stack’s state.

    When a stack displays views that rely on only one kind of data, you can use a standard collection, like an array, to hold the data. If you need to present different kinds of data in a single stack, use a navigation path instead. The path uses type erasure so you can manage a collection of heterogeneous elements. The path also provides the usual collection controls for adding, counting, and removing data elements.

  • Swift language announcements from WWDC22

    wwdc22-swift-updates-sketch

  • Getting UIKit's UICalendarView from iOS 16 fully functioning in a SwiftUI app

    The new UICalendarView added to UIKit in iOS 16 looks great but there’s not a SwiftUI equivalent. Here’s how I got a SwiftUI app to show the calendar based on custom dates and update the calendar when dates change.

  • Swiftly

    Swift references for busy coders

  • SwiftUI Renderers and Their Tricks

    In the past, if we wanted to convert a SwiftUI view into an image we would wrap the view in a representable, and then use UIKit/AppKit to build our image. With the new renderers that is no longer necessary, but the approach is totally different, and there is a whole set of considerations we need to make in order to be successful.

  • UIs Are Not Pure Functions of the Model (2018)

    The idea of UI being a pure function of the model seems so obviously incorrect, and leads to such a plethora of problems, that it is a bit puzzling how one could come up with it in the first place, and certainly how one would stick with it in face of the avalanche of problems that just keeps coming. A part of this is certainly the current unthinking infatuation with functional programming ideas. These ideas are broadly good, but not nearly as widely or universally applicable as some of their more starry-eyed proponents propose (I hesitate to use the word "think" in this context).

  • Functional UI

    View models and functional UI look like solutions, and they are indeed effective ways of managing complexity by making all the constituent state visible and enumerated. But in my experience they also encourage a way of programming where you bind as much as possible, and the problem with that is that, as the title of the linked post notes, UIs are not pure functions of the models.

  • ExtensionKit

    Create executable bundles to extend the functionality of other apps by presenting a user interface. Extensions are executable code bundles, in one app that perform functions in a second, host app. Host apps declare extension points that control the kinds of functionality its extensions can implement. Extensions allow iOS and Mac apps to include code that runs inside system apps. For example, Messages provides extension points so apps can create iMessage Apps. Messages automatically finds extension bundles that target its extension points and makes them available in its app drawer. A Mac app can also declare its own extension points so that other apps can extend the Mac app’s functionality.

  • Really structs ought to be implicitly indirected into a COW box after a certain size threshold
  • Large structs and stack overflow (code)

    Reducing stack costs of structs by using Copy on Write (CoW)

  • Large structs and stack overflow (forum)

    Short summary:

    • Move all stored properties of your struct to a new class called Storage.
    • Your struct now only stores an instance of this new class.
    • Add computed properties to your struct for each property, which get/set the value on the class instance.
    • Before setting the value in your setter, check whether the class instance has a reference count of 1 by using isKnownUniquelyReferenced(_:).
    • If it is not uniquely referenced, you need to copy your storage before setting the value.
    • That's it.
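
    The steps above can be sketched as follows (the Storage and BigValue names are made up):

```swift
// A hand-rolled copy-on-write box: the struct stores only a class
// reference, and copies it lazily on mutation when it is shared.
final class Storage {
    var values: [Int]
    init(values: [Int]) { self.values = values }
}

struct BigValue {
    private var storage: Storage

    init(values: [Int]) {
        storage = Storage(values: values)
    }

    var values: [Int] {
        get { storage.values }
        set {
            // Copy only when the storage is shared with another value.
            if !isKnownUniquelyReferenced(&storage) {
                storage = Storage(values: storage.values)
            }
            storage.values = newValue
        }
    }
}

var a = BigValue(values: [1, 2, 3])
var b = a          // shares the same Storage instance
b.values = [9]     // triggers a copy; `a` is untouched
```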
  • dotSwift 2019 - Johannes Weiss - High-performance systems in Swift

    Languages that have a rather low barrier to entry often struggle when it comes to performance because too much is abstracted from the programmer to make things simple. Therefore in those languages, the key to unlock performance is often to write some of the code in C, collaterally abandoning the safety of the higher-level language.

    Swift on the other hand lets you unlock best of both worlds: performance and safety. Naturally not all Swift code is magically fast and just like everything else in programming performance requires constant learning.

    Johannes discusses one aspect of what was learned during SwiftNIO development. He debunks one particular performance-related myth that has been in the Swift community ever since, namely that classes are faster to pass to functions than structs.

  • Native Debuggers Command Map

    Below is a table of equivalent debugger commands for the GDB, LLDB, WinDbg (CDB), and HyperDbg debuggers.

June

  • My first contribution to Homebrew

    • New Formula: GOCR
  • Model View Controller Store: Reinventing MVC for SwiftUI with Boutique

    I've built a batteries-included Store that comes with everything you'll need out of the box called Boutique to be the foundation for that data. Boutique does no behind the scenes magic and doesn't resort to shenanigans like runtime hacking to achieve a great developer experience.

  • SwiftUI Index

    SwiftUI Changelog

  • SiriTipView

    A SwiftUI view that displays the phrase someone uses to invoke an App Shortcut.

    Use a SiriTipView to display the spoken phrase for the intent you specify. Include an instance of your intent when you create the view, and bind the view to a Boolean to handle the view’s presentation. The following example shows how to configure a button for a reorder intent and bind it to an isInserted variable.

  • WebAuthn — A better alternative for securing our sensitive information online

    The Web Authentication API (also known as WebAuthn) is a specification written by the W3C and FIDO, with the participation of Google, Mozilla, Microsoft, Yubico, and others. The API allows servers to register and authenticate users using public key cryptography instead of a password.

  • Mastering NavigationStack in SwiftUI. Navigator Pattern.

    SwiftUI is a declarative, data-driven framework that allows us to build complex user interfaces by defining how data is rendered on screen. Navigation was the main pain point of the framework from the very first day. Fortunately, things have changed since WWDC 22, and SwiftUI provides a new data-driven Navigation API.

  • WWDC 22 Digital Lounge Archive (SwiftUI + Design)

    To help future us (and you!), we’ve copied every question/answer from the lounges of special interest to us: SwiftUI and design. I bet we’ll be referencing them throughout development, and we expect many others will too. So many valuable insights and tips!

  • #HEXWORDS

    Why bother with a random green when you can choose to be a #BADA55!

  • App Clips Diagnostic Tool

    App Clip diagnostics checks App Clip experiences that use physical codes, Safari, and iMessage, and it also checks your associated domains configuration for universal links. This simple new tool makes it much easier to get your configuration right.

  • Replace CAPTCHAs with Private Access Tokens

    Don't be captured by CAPTCHAs! Private Access Tokens are a powerful alternative that help you identify HTTP requests from legitimate devices and people without compromising their identity or personal information. We'll show you how your app and server can take advantage of this tool to add confidence to your online transactions and preserve privacy.

  • Eliminate data races using Swift Concurrency

    Join us as we explore one of the core concepts in Swift concurrency: isolation of tasks and actors. We'll take you through Swift's approach to eliminating data races and its effect on app architecture. We'll also discuss the importance of atomicity in your code, share the nuances of Sendable checking to maintain isolation, and revisit assumptions about ordering work in a concurrent system.
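    The core idea — isolation — can be sketched in a few lines of plain Swift (illustrative, not from the session): an actor serializes all access to its state, so concurrent mutation cannot race.

    ```swift
    // The actor guarantees that only one task at a time
    // touches `value`, eliminating the data race.
    actor Counter {
        private var value = 0

        func increment() -> Int {
            value += 1
            return value
        }

        var current: Int { value }
    }

    // Concurrently hammer the counter; with a plain class and no
    // locking, this would be undefined behavior.
    let counter = Counter()
    await withTaskGroup(of: Void.self) { group in
        for _ in 0..<1_000 {
            group.addTask { _ = await counter.increment() }
        }
    }
    print(await counter.current) // 1000
    ```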

  • Efficiency awaits: Background tasks in SwiftUI

    Background Tasks help apps respond to system events and keep time-sensitive data up to date. Learn how you can use the SwiftUI Background Tasks API to handle tasks succinctly. We'll show you how to use Swift Concurrency to handle network responses, background refresh, and more — all while preserving performance and power.
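    As a minimal sketch of that API (the app, view, and task identifier are illustrative), the `backgroundTask(_:action:)` scene modifier handles an app-refresh task that you schedule separately via `BGTaskScheduler`:

    ```swift
    import SwiftUI

    @main
    struct WeatherApp: App {
        var body: some Scene {
            WindowGroup { ContentView() }
            // Runs when the system wakes the app for the scheduled
            // app-refresh task with this (illustrative) identifier.
            .backgroundTask(.appRefresh("com.example.weather.refresh")) {
                await refreshForecast()
            }
        }
    }

    func refreshForecast() async {
        // Fetch and cache the latest data here.
    }

    struct ContentView: View {
        var body: some View { Text("Weather") }
    }
    ```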

  • Demystify parallelization in Xcode builds

    Learn how the Xcode build system extracts maximum parallelism from your builds. We'll explore how you can structure your project to improve build efficiency, take you through the process for resolving relationships between targets' build phases in Xcode, and share how you can take full advantage of available hardware resources when compiling in Swift. We'll also introduce you to Build Timeline — a powerful tool to help you monitor your build efficiency and performance.

  • Debug Swift debugging with LLDB

    Learn how you can set up complex Swift projects for debugging. We'll take you on a deep dive into the internals of LLDB and debug info. We'll also share best practices for complex scenarios such as debugging code built on build servers or code from custom build systems.

  • Resizable Sheet in SwiftUI

    Starting from iOS 16 we can present resizable sheets natively in SwiftUI. In this article we'll look into what we can achieve with the new APIs and what limitations they have in comparison with UIKit.
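    For example (a minimal sketch), the iOS 16 `presentationDetents` modifier is what makes a SwiftUI sheet resizable:

    ```swift
    import SwiftUI

    struct MapView: View {
        @State private var showDetails = false

        var body: some View {
            Button("Show details") { showDetails = true }
                .sheet(isPresented: $showDetails) {
                    Text("Details")
                        // The sheet can rest at medium or large height;
                        // the user drags between the two detents.
                        .presentationDetents([.medium, .large])
                        .presentationDragIndicator(.visible)
                }
        }
    }
    ```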

  • Design protocol interfaces in Swift

    Learn how you can use Swift 5.7 to design advanced abstractions using protocols. We'll show you how to use existential types, explore how you can separate implementation from interface with opaque result types, and share the same-type requirements that can help you identify and guarantee relationships between concrete types.
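    A small, self-contained sketch of the Swift 5.7 features the session covers (the functions are illustrative): `Collection`'s primary associated type lets a parameter be written as `some Collection<Int>`, and an opaque result type hides the concrete return type behind the protocol interface.

    ```swift
    // Primary associated type: callers constrain the Element
    // directly in angle brackets instead of a where clause.
    func evens(in numbers: some Collection<Int>) -> [Int] {
        numbers.filter { $0.isMultiple(of: 2) }
    }

    // Opaque result type: callers know only "some Collection<Int>",
    // not that the implementation happens to return an Array.
    func firstTen() -> some Collection<Int> {
        Array(1...10)
    }

    let result = evens(in: firstTen())
    print(result) // [2, 4, 6, 8, 10]
    ```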

  • navigationDestination(for:destination:)

    Associates a destination view with a presented data type for use within a navigation stack.
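    A minimal sketch of how the modifier pairs with value-presenting links (the `Recipe` type is illustrative):

    ```swift
    import SwiftUI

    struct Recipe: Hashable {
        let name: String
    }

    struct RecipeList: View {
        let recipes = [Recipe(name: "Soup"), Recipe(name: "Salad")]

        var body: some View {
            NavigationStack {
                List(recipes, id: \.self) { recipe in
                    // The link pushes a Recipe value onto the stack...
                    NavigationLink(recipe.name, value: recipe)
                }
                // ...and this modifier maps every presented Recipe
                // value to its destination view.
                .navigationDestination(for: Recipe.self) { recipe in
                    Text(recipe.name)
                }
            }
        }
    }
    ```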

  • About the security of passkeys

    Passkeys are a replacement for passwords. They are faster to sign in with, easier to use, and much more secure.

    Passkeys are a replacement for passwords that are designed to provide websites and apps a passwordless sign-in experience that is both more convenient and more secure. Passkeys are a standards-based technology that, unlike passwords, are resistant to phishing, are always strong, and are designed so that there are no shared secrets. They simplify account registration for apps and websites, are easy to use, and work across all of your Apple devices, and even non-Apple devices within physical proximity.

  • Compose custom layouts with SwiftUI

    SwiftUI now offers powerful tools to level up your layouts and arrange views for your app's interface. We'll introduce you to the Grid container, which helps you create highly customizable, two-dimensional layouts, and show you how you can use the Layout protocol to build your own containers with completely custom behavior. We'll also explore how you can create seamless animated transitions between your layout types, and share tips and best practices for creating great interfaces.
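    A small sketch of the `Grid` container (contents illustrative): unlike nested stacks, cells align across rows automatically.

    ```swift
    import SwiftUI

    struct StatsGrid: View {
        var body: some View {
            // Grid (iOS 16+) sizes each column to its widest cell,
            // keeping both rows aligned.
            Grid(alignment: .leading, horizontalSpacing: 12, verticalSpacing: 8) {
                GridRow {
                    Text("Metric").bold()
                    Text("Value").bold()
                }
                Divider()
                    .gridCellColumns(2) // span both columns
                GridRow {
                    Text("Downloads")
                    Text("1,204")
                }
                GridRow {
                    Text("Crashes")
                    Text("3")
                }
            }
        }
    }
    ```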

  • Bringing robust navigation structure to your SwiftUI app

    Use navigation links, stacks, destinations, and paths to provide a streamlined experience for all platforms, as well as behaviors such as deep linking and state restoration.

  • NavigationStack

    A view that displays a root view and enables you to present additional views over the root view.

    Use a navigation stack to present a stack of views over a root view. People can add views to the top of the stack by clicking or tapping a NavigationLink, and remove views using built-in, platform-appropriate controls, like a Back button or a swipe gesture. The stack always displays the most recently added view that hasn’t been removed, and doesn’t allow the root view to be removed.
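    Pushing and popping can also be driven programmatically by binding the stack to a path collection — a minimal sketch (the `Park` type is illustrative):

    ```swift
    import SwiftUI

    struct Park: Hashable {
        let name: String
    }

    struct ParksView: View {
        // The stack mirrors this array: appending pushes a view,
        // removing elements (or the Back button) pops.
        @State private var path: [Park] = []

        var body: some View {
            NavigationStack(path: $path) {
                Button("Visit Yosemite") {
                    path.append(Park(name: "Yosemite"))
                }
                .navigationDestination(for: Park.self) { park in
                    Text(park.name)
                }
            }
        }
    }
    ```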

  • Creating Lock Screen Widgets and Watch Complications

    Create accessory widgets that appear on the iPhone Lock Screen and as complications on Apple Watch.

    Starting with iOS 16 and watchOS 9, WidgetKit allows you to extend the reach of your app to the Lock Screen on iPhone and to the watch face as complications on Apple Watch.
