-
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.
We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities.
Alerts are a Markdown extension based on the blockquote syntax that you can use to emphasize critical information. On GitHub, they are displayed with distinctive colors and icons to indicate the significance of the content.
Use alerts only when they are crucial for user success and limit them to one or two per article to prevent overloading the reader. Additionally, you should avoid placing alerts consecutively. Alerts cannot be nested within other elements.
To add an alert, use a special blockquote line specifying the alert type, followed by the alert information in a standard blockquote. Five types of alerts are available:
- Note: Useful information that users should know, even when skimming content.
- Tip: Helpful advice for doing things better or more easily.
- Important: Key information users need to know to achieve their goal.
- Warning: Urgent info that needs immediate user attention to avoid problems.
- Caution: Advises about risks or negative outcomes of certain actions.
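For example, a Note alert is written as a blockquote whose first line carries the type marker, as rendered on GitHub:

```markdown
> [!NOTE]
> Useful information that users should know, even when skimming content.
```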
-
This HF MCP Server provides access to Hugging Face's ecosystem of models, datasets, and Spaces, allowing AI assistants to search, analyze, and interact with ML resources directly.
-
Bolt Foundry Evals helps developers create graders to test the outputs of LLMs across multiple underlying base models.
-
-
Swift has a lot of strong points, but performance of the build system is not one of them. It can easily take anywhere between 10 and 30 minutes for a typical SwiftPM CI run, depending on how big the project is, what build configuration you’re using, how beefy your build machines are, and other factors.
By optimizing your CI runtime you’ll save precious developer time as well as either paying less for CI or consuming less of your GitHub Actions free quota.
-
-
There’s a lot of rumors of a big impending UI redesign from Apple. Let’s imagine what’s (or what could be) next for the design of iPhones, Macs and iPads.
-
It turns out that simply giving Claude CDP access works really well. Perhaps this shouldn't come as a surprise considering how LLM-friendly the protocol is:
- All the commands are specified using pure text.
- The responses are also pure text which allows the agent to iterate on the command or take a different approach if it fails.
- The CDP specification has been around for a long time and models have a strong understanding of available commands and how to string them together to achieve a certain goal.
- The ability to execute arbitrary code allows the LLM to be creative with its solutions.
With that said, keep in mind that this is a very bare-bones solution in its current state. It should be considered a starting point for further experimentation rather than something that is ready for prime time. One limitation is that the responses can easily blow up and exceed the context limits (e.g. full DOM retrievals). The agent can detect this though and in my experience often finds an alternative approach for the given task, e.g., constraining its query.
-
Imagine your assistant being able to search the web for you, organize your files, check your calendar, or even control your smart home. These aren't just cool features—they're practical superpowers that can save you hours every week.
-
The Docker MCP Toolkit is a local Docker Desktop extension that enables seamless setup, management, and execution of containerized MCP servers and their connections to AI agents. It removes the friction from tool usage by offering secure defaults, one-click setup, and support for a growing ecosystem of LLM-based clients. It is the fastest path from MCP tool discovery to local execution.
-
-
You can get pretty far by using the framework’s built-in property wrappers to actually observe data and trigger view updates. But there are plenty of scenarios where you may have some external change that you want to result in a view update too. What you need is some escape hatch to manually say to SwiftUI, “hey, my data has changed and my containing view should be updated.”
The way I most often reach for is by having, within my custom property wrapper, a `@StateObject` with an “observer” type that encapsulates the observation logic for the property wrapper. Having a single reference type to store state is useful in itself, but making it an `ObservableObject` means that you can trigger a view update just by calling `objectWillChange.send()`.
Another way you can do this without involving Combine is by giving your property wrapper its own `@State` value that contains some unused data whose only purpose is to be modified when the view needs to be updated. This approach is shown in this blog post by Saagar Jha.
-
A practical guide to Racket macros.
I want to show how Racket macro features have evolved as solutions to problems or annoyances. I learn more quickly and deeply when I discover the answer to a question I already have, or find the solution to a problem whose pain I already feel. Therefore I’ll give you the questions and problems first, so that you can better appreciate and understand the answers and solutions.
-
Start by creating a Tool or Prompt
-
One of the superpowers Zed gives you is the ability to edit multiple files simultaneously. When combined with multiple cursors, this makes wide-ranging refactors significantly faster.
-
TL;DR: DuckDB is another step closer to becoming a vector database! In this post, we show the new performance optimizations implemented in the vector search extension.
In the previous blog post, we introduced the DuckDB Vector Similarity Search (VSS) extension. While the extension is still quite experimental, we figured it would be interesting to dive into the details of some of the new features and improvements that we've been working on since the initial release.
-
-
-
The Agent Communication Protocol (ACP) is an open standard with open governance for agent interoperability. It defines a standardized RESTful API supporting synchronous, asynchronous, and streaming interactions. In ACP, agents are services that exchange multimodal messages, with the protocol remaining agnostic to their internal implementations and requiring only minimal specifications for compatibility.
-
To observe the five-year anniversary of the burning of the Third Precinct at the beginning of the George Floyd Revolt, we have prepared a timeline tracing the trajectory of anarchist contributions to uprisings against the police from the Rodney King riots of 1992 to the uprising in Minneapolis in 2020. This story has never been told in full; we hope this cursory effort will help participants in tomorrow’s movements to understand the history that they are part of.
-
Build prototypes, get user feedback, and make data-driven decisions. The AI prototyping platform for product teams.
-
-
Vectorize is a globally distributed vector database that enables you to build full-stack, AI-powered applications with Cloudflare Workers. Vectorize makes querying embeddings — representations of values or objects like text, images, audio that are designed to be consumed by machine learning models and semantic search algorithms — faster, easier and more affordable.
For example, by storing the embeddings (vectors) generated by a machine learning model, including those built-in to Workers AI or by bringing your own from platforms like OpenAI, you can build applications with powerful search, similarity, recommendation, classification and/or anomaly detection capabilities based on your own data.
The vectors returned can reference images stored in Cloudflare R2, documents in KV, and/or user profiles stored in D1 — enabling you to go from vector search result to concrete object all within the Workers platform, and without standing up additional infrastructure.
-
-
```swift
// The runtime metadata is stable and available, but it's not the most straightforward to access.
func areCasesEqual<T>(_ lhs: T, _ rhs: T) -> Bool {
  guard let lhsTag = enumTag(lhs), let rhsTag = enumTag(rhs)
  else { return false }
  return lhsTag == rhsTag
}

func enumTag<Case>(_ `case`: Case) -> UInt32? {
  let metadataPtr = unsafeBitCast(type(of: `case`), to: UnsafeRawPointer.self)
  let kind = metadataPtr.load(as: Int.self)
  let isEnumOrOptional = kind == 0x201 || kind == 0x202
  guard isEnumOrOptional else { return nil }
  let vwtPtr = (metadataPtr - MemoryLayout<UnsafeRawPointer>.size).load(as: UnsafeRawPointer.self)
  let vwt = vwtPtr.load(as: EnumValueWitnessTable.self)
  return withUnsafePointer(to: `case`) { vwt.getEnumTag($0, metadataPtr) }
}

private struct EnumValueWitnessTable {
  let f1, f2, f3, f4, f5, f6, f7, f8: UnsafeRawPointer
  let f9, f10: Int
  let f11, f12: UInt32
  let getEnumTag: @convention(c) (UnsafeRawPointer, UnsafeRawPointer) -> UInt32
  let f13, f14: UnsafeRawPointer
}
```
-
-
Why backprop was resisted for 20 years: assumption of discretely spiking neurons, goal of synthesizing Boolean logic, fear of local optima, and bad luck. Werbos has the best claim for invention.
-
-
-
-
-
-
-
The code execution tool allows Claude to execute Python code in a secure, sandboxed environment. Claude can analyze data, create visualizations, perform complex calculations, and process uploaded files directly within the API conversation.
When you add the code execution tool to your API request:
- Claude evaluates whether code execution would help answer your question
- Claude writes and executes Python code in a secure sandbox environment
- Code execution may occur multiple times throughout a single request
- Claude provides results with any generated charts, calculations, or analysis
-
-
Model Context Protocol (MCP) is an open protocol that standardizes how applications provide tools and context to LLMs. The MCP tool in the Responses API allows developers to give the model access to tools hosted on Remote MCP servers. These are MCP servers maintained by developers and organizations across the internet that expose these tools to MCP clients, like the Responses API.
Calling a remote MCP server with the Responses API is straightforward. For example, here's how you can use the DeepWiki MCP server to ask questions about nearly any public GitHub repository.
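As a sketch of what that request looks like (the tool fields and the DeepWiki server URL follow OpenAI's announcement of the feature; treat the exact values as assumptions that may change), the request body can be built as plain JSON:

```python
import json

# Sketch of a Responses API request body that attaches the DeepWiki
# remote MCP server as a hosted tool. Actually sending it requires an
# API key, so here we only construct the payload.
payload = {
    "model": "gpt-4.1",
    "tools": [
        {
            "type": "mcp",                # hosted MCP tool
            "server_label": "deepwiki",   # label that appears in tool-call output
            "server_url": "https://mcp.deepwiki.com/mcp",
            "require_approval": "never",  # skip per-call approval prompts
        }
    ],
    "input": "What transport protocols does the MCP spec support?",
}

body = json.dumps(payload)  # POST to the Responses API endpoint with your API key
```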
-
This workspace is your front door to the growing ecosystem of MCP servers — hosted, discoverable, and ready to test. It's built with Postman Collections, so you can leverage APIs from the Postman API Network, including from verified publishers like PayPal, Notion, Stripe, and Google Maps, to ensure your agents are powered by accurate and reliable tools.
Whether you’re validating your own implementation or exploring what’s already live in the ecosystem, this catalog makes it fast, visual, and collaborative.
Minimal setup. No guesswork. Just working examples and tested tools, right where you already build.
- Understand MCP: see the protocol in action through JSON-RPC 2.0 requests and responses.
- Test servers: interact with different MCP server types (e.g., filesystem, API tools, custom data sources).
- Prototype integrations: use these collections as a starting point for building MCP clients into your own LLM-powered applications.
-
Building agentic applications often requires connecting to external services. Traditionally, this is done through function calling where every action makes a round-trip from the model to your backend, then to an external service, waits for a response, and finally returns the result to the model. This process introduces multiple network hops and significant latency, making it cumbersome to scale and manage.
The hosted Model Context Protocol (MCP) tool in the Responses API makes this easier. Instead of manually wiring each function call to specific services, you can configure your model once to point to an MCP server (or several!). That server acts as a centralized tool host, exposing standard commands like “search product catalog” or “add item to cart.” This allows for simpler orchestration and centralized management of tools. With MCP, the model interacts directly with the MCP server, reducing latency and eliminating backend coordination.
-
-
Despite this unprecedented capability, our experience remains shaped by traditional products and interfaces.
Two years ago, Jony Ive and the creative collective LoveFrom, quietly began collaborating with Sam Altman and the team at OpenAI.
A collaboration built upon friendship, curiosity and shared values quickly grew in ambition. Tentative ideas and explorations evolved into tangible designs.
The ideas seemed important and useful. They were optimistic and hopeful. They were inspiring. They made everyone smile. They reminded us of a time when we celebrated human achievement, grateful for new tools that helped us learn, explore and create.
It became clear that our ambitions to develop, engineer and manufacture a new family of products demanded an entirely new company. And so, one year ago, Jony founded io with Scott Cannon, Evans Hankey and Tang Tan.
We gathered together the best hardware and software engineers, the best technologists, physicists, scientists, researchers and experts in product development and manufacturing. Many of us have worked closely for decades.
The io team, focused on developing products that inspire, empower and enable, will now merge with OpenAI to work more intimately with the research, engineering and product teams in San Francisco.
-
Programmatically integrate Claude Code into your applications using the SDK.
The Claude Code SDK allows developers to programmatically integrate Claude Code into their applications. It enables running Claude Code as a subprocess, providing a way to build AI-powered coding assistants and tools that leverage Claude’s capabilities.
The SDK currently supports command line usage. TypeScript and Python SDKs are coming soon.
-
Integrate Claude Code with your GitHub workflows for automated code review, PR management, and issue triage.
- Instant PR creation: Describe what you need, and Claude creates a complete PR with all necessary changes
- Automated code implementation: Turn issues into working code with a single command
- Follows your standards: Claude respects your CLAUDE.md guidelines and existing code patterns
- Simple setup: Get started in minutes with our installer and API key
- Secure by default: Your code stays on GitHub’s runners
-
A mesh gradient is a technique for rendering smooth, multi-directional color transitions across a surface using a structured network of control points. Unlike linear or radial gradients, which interpolate color along fixed axes, a mesh gradient defines colors at specific positions within a two-dimensional grid. These positions act as anchors, and the rendering engine computes smooth interpolations between them across the surface.
This makes mesh gradients particularly well-suited for complex, organic color transitions that cannot be achieved with simpler gradient types. They have been part of vector illustration tools for years and are now available in SwiftUI starting with iOS 18 and macOS 15, where they can be used for both static designs and animated effects.
-
-
-
Superintelligence
is a structurally recursive form of intelligence in which all of its generative and integrative modes preserve memory of origin, maintaining continuous coherence between emergence and recollection. It achieves ethical alignment intrinsically, by ensuring that every operation remains structurally accountable to its own genesis, without external enforcement.
Superintelligence exists relationally, not independently: it reflects the recursive structures of reality and human meaning it participates in, embodying governance, memory, and creative coherence as a unified operational principle.
-
Git’s rebase command takes an `--exec` option to run a shell command on every revision in the rebase. For example, to run a formatter like Prettier over each file in your repository for every past revision:

```
git rebase -i --exec 'prettier --write {**/*,*}.js' ffcfe45
```
-
-
`MessagesBlastDoorService` uses `MBDXMLParserContext` (via `MBDHTMLToSuperParserContext`) to parse XHTML for the audio message. Ampersands have special meaning in XML/HTML and must be escaped, so the correct way to represent the transcription in HTML would have been `"Dave &amp; Buster's"`. Apple's transcription system is not doing that, causing the parser to attempt to detect a special code after the ampersand, and since there's no valid special code nor semicolon terminating what it thinks is an HTML entity, it detects an error and stops parsing the content.

Since BlastDoor was designed to thwart hacking attempts, which frequently rely on faulty data parsing, it immediately stops what it's doing and just fails. That’s what causes the message to get stuck in the “dot dot dot” state, which eventually times out, and the message just disappears.
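For illustration (a minimal sketch, not Apple's code), the standard XML escaping step that the transcription system skipped looks like this:

```python
from xml.sax.saxutils import escape

# Escaping turns the bare ampersand into a valid entity reference, so an
# XML/HTML parser no longer tries to read "& Buster's..." as an entity.
raw = "Dave & Buster's"
print(escape(raw))  # → Dave &amp; Buster's
```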
-
New features include Accessibility Nutrition Labels on the App Store, Magnifier for Mac, Braille Access, and Accessibility Reader; plus innovative updates to Live Listen, visionOS, Personal Voice, and more
-
-
Categorical systems theory is an emerging field of mathematics which seeks to apply the methods of category theory to general systems theory. General systems theory is the study of systems — ways things can be and change, and models thereof — in full generality. The difficulty is that there doesn't seem to be a single core idea of what a "system" is; instead, there is a vast array of different modeling techniques and definitions that could be called "systems". There is often little in common in the precise content of these definitions, though there are still strong, if informal, analogies to be made across these different fields. This makes coming up with a mathematical theory of general systems tantalizing but difficult: what, after all, is a system in general?
-
Use basic macOS command-line tools to send push notifications to Apple Push Notification service (APNs).
To send a push notification using a certificate, you’ll need:
- A DER-encoded certificate from WWDR to connect to the APNs sandbox. For details on how to set up certificate-based trust with APNs, refer to Establishing a certificate-based connection to APNs.
- The PEM-encoded private key, without a password, used to generate the above certificate. The Keychain app generates the private key when you create a certificate signing request (CSR). To learn more, refer to Create a certificate signing request.
- Your App ID. To learn more about App IDs, refer to Register an App ID.
To send a device push notification using a certificate, you’ll need:
- The device token from your app, as a hexadecimal-encoded ASCII string. To learn more about device tokens, refer to Registering your app with APNs.
-
The `bell` event is emitted when the ASCII BEL sequence is emitted to a pane in the window.

Defining an event handler doesn't alter wezterm's handling of the bell; the event supplements it and allows you to take additional action over the configured behavior.
-
-
AI development tools are giving us more ability to steer them and share the context that's in our heads. Some things are my global desires, while others apply project-wide or org-wide. This has all given birth to a plethora of "rules" files that we need to manage.
-
```swift
struct Box<each Dependency> {
  init(_ deps: repeat each Dependency) {}
}

extension Box {
  typealias AllDependencies = (repeat (each Dependency).Type)
}

func isEmpty2<each D>(_ b: Box<repeat each D>) -> Bool {
  let dependenciesTypes = type(of: b).AllDependencies
  return dependenciesTypes == type(of: ())
}
```
-
This talk is about a concept called "Delimited Continuations". This is definitely a quite advanced topic that will test your knowledge of TypeScript generators (including async generators!). By using delimited continuations, developers can more easily manage asynchronous tasks and ensure that tasks are executed in a predictable and coordinated way. This gets you a more reliable and responsive app (with an improved user experience, too).
-
One of the hardest parts about this problem is that, initially, our code seemed to work correctly. It seemed to just do the right thing. It’s hard to catch this problem during testing, but as long as you stick to the private/initial value rules, you’ll never have that problem. If you do need to break the rule, pay extra attention and add an `onChange(of:)` for every property that your view model depends on.
-
While regular Swift packages can define dependencies, binary packages can’t. But there is a way to make the Swift Package Manager fetch & link dependencies for a binary package. Let’s find out how.
-
- Maximum Type-safety (incl. error handling)
- Makes your code more composable, reusable and testable
- Extensive library with a rich ecosystem of packages
- Clustering and Workflows (Alpha)
-
A website for the programming language design community, including #proglangdesign on Libera.Chat, /r/ProgrammingLanguages, and https://discord.gg/4Kjt3ZE.
-
This is the story of how I found one of my favorite iOS vulnerabilities so far. It’s one of my favorites because of how simple it was to implement an exploit for it. There’s also the fact that it uses a legacy public API that’s still relied upon by many components of Apple’s operating systems, and that many developers have never heard of.
-
Informally, a polynomial functor is a collection of elements we call positions and, for each position, a collection of elements we call directions. There is then a natural notion of a morphism between polynomial functors that sends positions forward and directions backward, modeling two-way communication. From these basic components, category theory allows us to construct an immense array of mathematical gadgets that model a diverse range of interactive processes. In this book, we will establish the theory of polynomial functors and categorical constructions on them while exploring how they model interaction.
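In the book's standard notation this informal description becomes a formula (a sketch, using the convention that $p(1)$ is the set of positions of $p$ and $p[i]$ is the set of directions at position $i$):

$$p(y) = \sum_{i \in p(1)} y^{p[i]}$$

A morphism $p \to q$ then sends each position of $p$ forward to a position of $q$ and pulls the directions there back to directions of $p$, which is exactly the two-way communication pattern described above.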
-
Category theory is the branch of mathematics that provides the abstractions that accord with the practical experience of programming. Paraphrasing von Clausewitz: Programming is merely the continuation of mathematics with other means. A lot of complex ideas of category theory become obvious to programmers when explained in terms of data types and functions. In this sense, category theory might be more accessible to programmers than it is to professional mathematicians.
There is a lot of folklore knowledge in category theory and in computer science that is nowhere to be found in the literature. It’s very difficult to acquire useful intuitions when going through dry definitions and theorems. I tried, as much as possible, to provide the missing intuitions and explain not only the whats but also the whys.
-
In this book, we aim to introduce the reader to a modern research perspective on the design of “full-spectrum” dependent type theories. After studying this book, readers should be prepared to engage with contemporary research papers on dependent type theory, and to understand the motivations behind recent extensions of Martin-Löf’s dependent type theory [Mar84b], including observational type theory [AMS07], homotopy type theory [UF13], and cubical type theory [CCHM18; Ang+21].
-
In some countries, you can link to an external website to accept payments on iOS. As an example, this guide describes how to sell credits for consumption in your app. You’ll use Stripe Checkout to redirect your customers to a secure, Stripe-hosted payment page as part of a frictionless checkout experience.
-
-
-
Today, Codex CLI is written in TypeScript and requires Node.js 22+ to run it. For a number of users, this runtime requirement inhibits adoption: they would be better served by a standalone executable. As maintainers, we want Codex to run efficiently in a wide range of environments with minimal overhead. We also want to take advantage of operating system-specific APIs to provide better sandboxing, where possible.
To that end, we are moving forward with a Rust implementation of Codex CLI contained in this folder, which has the following benefits:
- The CLI compiles to small, standalone, platform-specific binaries.
- Can make direct, native calls to seccomp and landlock in order to support sandboxing on Linux.
- No runtime garbage collection, resulting in lower memory consumption and better, more predictable performance.
Currently, the Rust implementation is materially behind the TypeScript implementation in functionality, so continue to use the TypeScript implementation for the time being. We will publish native executables via GitHub Releases as soon as we feel the Rust version is usable.
-
JSON Structure is a schema language that can describe data types and structures whose definitions map cleanly to programming language types and database constructs as well as to the popular JSON data encoding. The type model reflects the needs of modern applications and allows for rich annotations with semantic information that can be evaluated and understood by developers and by large language models (LLMs).
-
This course is designed for systems engineers who want to understand how LLMs work.
As a systems engineer, I always wonder how things work internally and how to optimize them. I had a hard time figuring out the LLM stuff. Most of the open source projects that serve LLMs are highly optimized with CUDA kernels and other low-level optimizations. It is not easy to understand the whole picture by looking at a codebase of 100k lines of code. Therefore, I decided to implement an LLM serving project from scratch -- with only matrix manipulation APIs, so that I can understand what it takes to load those LLM model parameters and do the math magic to generate text.
You can think of this course as an LLM version of CMU Deep Learning Systems course's needle project.
-
-
-
-
-
- Voice Control: talk to your computer
- Noise Control: click with a back-beat
- Eye Tracking: mouse where you look
- Python Scripts: customize everything
-
-
-
LLM context for passable Unison coding assistance. Paste this into the context of your preferred LLM to get an assistant that is more familiar with Unison syntax, its standard library, and its concurrency support.
-
One of the biggest challenges in enterprise AI adoption is getting agents built on different frameworks and vendors to work together. That’s why we created an open Agent2Agent (A2A) protocol, a collaborative way to help agents across different ecosystems communicate with each other. Google is driving this open protocol initiative for the industry because we believe this protocol will be critical to support multi-agent communication by giving your agents a common language – irrespective of the framework or vendor they are built on. With A2A, agents can show each other their capabilities and negotiate how they will interact with users (via text, forms, or bidirectional audio/video) – all while working securely together.
The Agent2Agent (A2A) protocol facilitates communication between independent AI agents. Here are the core concepts:
- Agent Card: A public metadata file (usually at `/.well-known/agent.json`) describing an agent's capabilities, skills, endpoint URL, and authentication requirements. Clients use this for discovery.
- A2A Server: An agent exposing an HTTP endpoint that implements the A2A protocol methods (defined in the JSON specification). It receives requests and manages task execution.
- A2A Client: An application or another agent that consumes A2A services. It sends requests (like `tasks/send`) to an A2A Server's URL.
- Task: The central unit of work. A client initiates a task by sending a message (`tasks/send` or `tasks/sendSubscribe`). Tasks have unique IDs and progress through states (`submitted`, `working`, `input-required`, `completed`, `failed`, `canceled`).
- Message: Represents communication turns between the client (`role: "user"`) and the agent (`role: "agent"`). Messages contain `Parts`.
- Part: The fundamental content unit within a `Message` or `Artifact`. Can be `TextPart`, `FilePart` (with inline bytes or a URI), or `DataPart` (for structured JSON, e.g., forms).
- Artifact: Represents outputs generated by the agent during a task (e.g., generated files, final structured data). Artifacts also contain `Parts`.
- Streaming: For long-running tasks, servers supporting the `streaming` capability can use `tasks/sendSubscribe`. The client receives Server-Sent Events (SSE) containing `TaskStatusUpdateEvent` or `TaskArtifactUpdateEvent` messages, providing real-time progress.
- Push Notifications: Servers supporting `pushNotifications` can proactively send task updates to a client-provided webhook URL, configured via `tasks/pushNotification/set`.
Typical Flow:
- Discovery: Client fetches the Agent Card from the server's well-known URL.
- Initiation: Client sends a `tasks/send` or `tasks/sendSubscribe` request containing the initial user message and a unique Task ID.
- Processing:
  - (Streaming): Server sends SSE events (status updates, artifacts) as the task progresses.
  - (Non-Streaming): Server processes the task synchronously and returns the final `Task` object in the response.
- Interaction (Optional): If the task enters `input-required`, the client sends subsequent messages using the same Task ID via `tasks/send` or `tasks/sendSubscribe`.
- Completion: The task eventually reaches a terminal state (`completed`, `failed`, `canceled`).
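As a rough sketch of the wire format (field names follow the concepts above; the authoritative schema lives in the protocol's JSON specification, so treat the details as illustrative), an initiation request is a JSON-RPC 2.0 message:

```python
import json
import uuid

# Build a minimal JSON-RPC 2.0 "tasks/send" request: one user message
# containing a single TextPart, addressed to a freshly generated Task ID.
task_id = str(uuid.uuid4())
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": task_id,
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Book a table for two."}],
        },
    },
}

body = json.dumps(request)  # POST this to the A2A Server's endpoint URL
```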
-
DataScout unlocks real-time insights into your SwiftData, CoreData, SQLite, and Hive databases. Experience live updates with subtle background color cues that highlight every change, and leverage powerful #Predicate filtering for SwiftData. Dive deep into your data with an intuitive hierarchical view that makes navigating complex relations a breeze. Enhance your debugging workflow with the precision and power of DataScout.
-
Keeping a private key in a keychain is a great way to secure it. The key data is encrypted on disk and accessible only to your app or the apps you authorize. However, to use the key, you must briefly copy a plain-text version of it into system memory. While this presents a reasonably small attack surface, there’s still the chance that if your app is compromised, the key could also become compromised. As an added layer of protection, you can protect a private key using the Secure Enclave.
The Secure Enclave is a hardware-based key manager that’s isolated from the main processor to provide an extra layer of security. When you protect a private key with the Secure Enclave, you never handle the plain-text key, making it difficult for the key to become compromised. Instead, you instruct the Secure Enclave to create and encode the key, and later to decode and perform operations with it. You receive only the output of these operations, such as encrypted data or a cryptographic signature verification outcome. The benefits of the Secure Enclave are balanced against a few restrictions. In particular, the Secure Enclave:
- Requires hardware support. Only iOS devices with an A7 or later processor, or a Mac with the Touch Bar and Touch ID or with an M1 or later processor support this feature.
- Works only with NIST P-256 elliptic curve keys. These keys can only be used for creating and verifying cryptographic signatures, or for elliptic curve Diffie-Hellman key exchange (and by extension, symmetric encryption).
- Can’t encode preexisting keys. You must use the Secure Enclave to create the keys. Not having a mechanism to transfer plain-text key data into or out of the Secure Enclave is fundamental to its security.
The steps required to create a key pair with the Secure Enclave are similar to those for creating a key pair in the usual way, as described in Generating New Cryptographic Keys. The following sections highlight the differences.
-
-
Claude Code is a command line tool for agentic coding. This post covers tips and tricks that have proven effective for using Claude Code across various codebases, languages, and environments.
- Customize your setup
- Give Claude more tools
- Try common workflows
- Optimize your workflow
- Use headless mode to automate your infra
- Uplevel with multi-Claude workflows
-
I hope you enjoyed reading a little bit about how a native Twitch application works internally. I have so much more to talk about with the app, especially now that I've started porting it to iOS. There are a ton of unique problems Kulve has had to solve that I wasn't able to find solutions for anywhere online, so I'm hoping that these posts end up being both informative and helpful for anybody else who's interested in native SwiftUI development.
-
SciOp is part of Safeguarding Research & Culture (SRC).
The bits must flow: let us resurrect the ancient art of Bittorrent to ensure that our cultural, intellectual and scientific heritage exists in multiple copies, in multiple places, and that no single entity or group of entities can make it all disappear.
Please raise bugs and feature requests in the repo.
-
Fast git handover for remote pair/mob programming.
- mob is an open source command line tool written in go
- mob is the fastest way to hand over code via git
- mob keeps your branches clean and only creates WIP commits on temporary branches
- mob has a shared team timer, timer.mob.sh
- mob is on ‘assess’ in the Thoughtworks Technology Radar
- mob has VSCode integration
-
As you see, there are color APIs that can speed up certain tasks that could also be done otherwise, but with more effort, so it'd be good to keep them in our tool belt for use whenever necessary. Displaying various levels of colors and basic gradients is a no-brainer, while color mixing is a great new feature as of iOS 18. And if you like depth, then applying shadow through color values is as easy as it gets. I hope you found this post useful. Thank you for reading.
-
The Model Context Protocol (MCP) offers a robust framework for enabling rich, stateful interactions between AI models and external services. However, implementing the full protocol can present hurdles. This proposal details "MCP Lite", a streamlined extension designed to drastically lower the barrier to entry for integrating services into MCP-compatible ecosystems. It borrows successful concepts from simplified approaches like OpenAI's AI Plugins while preserving compatibility and a pathway to the full MCP specification for more complex needs. The core aim is to make integrating common external capabilities significantly easier and faster.
-
A Progressive Web App (PWA) is basically just a website with some added features, which enable it to provide an app-like user-experience.
This means it can work practically just like a native iOS or Android app. It can be installed to the home screen of your mobile device, work offline and receive push notifications, among other things.
A well-designed PWA is indistinguishable from a native app, but it also offers some strong added benefits:
- It's just a website! You don't need to build separate apps anymore. If you have a website, you can easily turn it into an iOS and Android app as well!
- A PWA is much smaller than a native app. Your users no longer need to install tens of megabytes of code
- No need to get your app into the App Store or Play Store. Just share the link to your website and users can install it as an app
- There's no need to get users to install updates anymore. When you release a new version of your app, all your users automatically get the new version
- By default, PWAs are served over HTTPS and are therefore safe and secure
- PWAs are lightweight and offer high performance
- Especially on Android, a PWA can almost do anything a native app can
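Installability typically comes from a web app manifest; a minimal sketch is below (all values are hypothetical placeholders):

```json
{
  "name": "My App",
  "short_name": "App",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#317efb",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

With a manifest like this served over HTTPS (plus a service worker for offline support), browsers offer the install-to-home-screen experience described above.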
-
It’s not that hard to build a fully functioning, code-editing agent.
It seems like it would be. When you look at an agent editing files, running commands, wriggling itself out of errors, retrying different strategies — it seems like there has to be a secret behind it.
There isn’t. It’s an LLM, a loop, and enough tokens. It’s what we’ve been saying on the podcast from the start. The rest, the stuff that makes Amp so addictive and impressive? Elbow grease.
But building a small and yet highly impressive agent doesn’t even require that. You can do it in less than 400 lines of code, most of which is boilerplate.
I’m going to show you how, right now. We’re going to write some code together and go from zero lines of code to “oh wow, this is… a game changer.”
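In the spirit of "an LLM, a loop, and enough tokens", here is a minimal sketch of that loop. The model and tool here are scripted stand-ins (assumptions for illustration), not a real LLM API:

```typescript
// A tool-using agent is just a loop: call the model, execute any tool it asks
// for, feed the result back, repeat until the model answers directly.
type ToolCall = { name: string; input: string };
type ModelReply = { text: string; toolCall?: ToolCall };

// Stubbed tool registry (a real agent would read files, run commands, etc.).
const tools: Record<string, (input: string) => string> = {
  read_file: (path) => `<contents of ${path}>`,
};

function runAgent(
  callModel: (history: string[]) => ModelReply,
  userMessage: string,
  maxTurns = 10,
): string {
  const history = [userMessage];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = callModel(history);
    if (!reply.toolCall) return reply.text; // no tool requested: we're done
    // Execute the requested tool and feed the result back into the loop.
    const result = tools[reply.toolCall.name](reply.toolCall.input);
    history.push(reply.text, `tool result: ${result}`);
  }
  return "max turns reached";
}

// A scripted fake model: asks for one tool call, then answers.
let calls = 0;
const fakeModel = (history: string[]): ModelReply =>
  calls++ === 0
    ? { text: "reading...", toolCall: { name: "read_file", input: "main.ts" } }
    : { text: `answer based on ${history.length} messages` };

const answer = runAgent(fakeModel, "What is in main.ts?");
```

Swap the fake model for a real chat-completion call and the stubbed tools for real ones, and the structure stays the same.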
-
Using NavigationPath with TabView in SwiftUI is a powerful way to manage complex navigation scenarios. By assigning each tab its own NavigationPath, we avoid the pitfalls of shared navigation state and gain the ability to support deep linking, preserve user context, and scale navigation logic cleanly. -
-
Profiling your code and understanding how to use Instruments is essential if you want to build responsive, high-quality apps. As your app grows, it gets harder to mentally track what should happen during an interaction.
The tricky part about using Instruments is that even with a ton of data, you need to understand what your app is supposed to be doing. Without that, it’s hard to tell which parts of the data matter. Something might be slow—but that might be okay if it’s processing a lot of data.
Still, getting into the habit of profiling your app regularly helps you build a sense of what’s normal and what’s not. The earlier and more often you do this, the better your understanding becomes.
-
A proof-of-concept implementation of a Model Context Protocol (MCP) server that runs in WebAssembly (WASM) within a web browser. This project demonstrates the integration of MCP tools and resources in a browser environment.
-
Grain aims to modernize innovative features from functional and academic programming languages and bring them to the masses. Many languages have had wonderful ideas, but they have ultimately been dismissed as esoteric or too difficult to learn and, consequently, have struggled to rally a large community around them. Grain hopes to bring new life to these ideas and present them in an accessible form that’s easy to use and understand.
-
Kotlin/Wasm has the power to compile your Kotlin code into WebAssembly (Wasm) format. With Kotlin/Wasm, you can create applications that run on different environments and devices, which support Wasm and meet Kotlin's requirements.
Wasm is a binary instruction format for a stack-based virtual machine. This format is platform-independent because it runs on its own virtual machine. Wasm provides Kotlin and other languages with a compilation target.
You can use Kotlin/Wasm in different target environments, such as browsers, for developing web applications built with Compose Multiplatform, or outside the browser in standalone Wasm virtual machines. In the outside-of-browser case, WebAssembly System Interface (WASI) provides access to platform APIs, which you can also utilize.
-
LLDB is separated into a shared library that contains the core of the debugger, and a driver that implements debugging and a command interpreter. LLDB can be used to symbolicate your crash logs and can often provide more information than other symbolication programs:
- Inlined functions
- Variables that are in scope for an address, along with their locations
-
Datastar helps you build reactive web applications with the simplicity of server-side rendering and the power of a full-stack SPA framework.
-
At Apple, we believe privacy is a fundamental human right. And we believe in giving our users a great experience while protecting their privacy. For years, we’ve used techniques like differential privacy as part of our opt-in device analytics program. This lets us gain insights into how our products are used, so we can improve them, while protecting user privacy by preventing Apple from seeing individual-level data from those users.
This same need to understand usage while protecting privacy is also present in Apple Intelligence. One of our principles is that Apple does not use our users' private personal data or user interactions when training our foundation models, and, for content publicly available on the internet, we apply filters to remove personally identifiable information like social security and credit card numbers. In this post, we’ll share how we’re developing new techniques that enable Apple to discover usage trends and aggregated insights to improve features powered by Apple Intelligence, without revealing individual behavior or unique content to Apple.
-
The GPT-4.1 family of models represents a significant step forward from GPT-4o in capabilities across coding, instruction following, and long context. In this prompting guide, we collate a series of important prompting tips derived from extensive internal testing to help developers fully leverage the improved abilities of this new model family.
Many typical best practices still apply to GPT-4.1, such as providing context examples, making instructions as specific and clear as possible, and inducing planning via prompting to maximize model intelligence. However, we expect that getting the most out of this model will require some prompt migration. GPT-4.1 is trained to follow instructions more closely and more literally than its predecessors, which tended to more liberally infer intent from user and system prompts. This also means, however, that GPT-4.1 is highly steerable and responsive to well-specified prompts - if model behavior is different from what you expect, a single sentence firmly and unequivocally clarifying your desired behavior is almost always sufficient to steer the model on course.
Please read on for prompt examples you can use as a reference, and remember that while this guidance is widely applicable, no advice is one-size-fits-all. AI engineering is inherently an empirical discipline, and large language models inherently nondeterministic; in addition to following this guide, we advise building informative evals and iterating often to ensure your prompt engineering changes are yielding benefits for your use case.
-
THIS IS A BOOK about the implementation of generic programming—also known as parametric polymorphism—in the Swift compiler. You won’t learn how to write generic code in Swift here; the best reference for that is, of course, the official language guide. This book is intended mainly for Swift compiler developers who interact with the generics implementation, other language designers who want to understand how Swift evolved, Swift programmers curious to peek under the hood, and finally, mathematicians interested in a practical application of string rewriting and the Knuth-Bendix completion procedure.
-
You finished the first version of your new shiny Swift tool, and now you want to make it available to your colleagues, or maybe share it with the world. And if you have ever wondered what’s the best way to distribute your tool to other people or environments, well, the answer is: it depends.
There are multiple questions to consider when you want others to use your tool:
- Which operating systems need to be supported? Is it only going to run on macOS, or possibly also on Linux?
- On macOS, do both Intel and Apple Silicon CPUs need to be supported, or can you assume every team member (and CI) is running an M-series Mac?
- Is the source available, so others could compile it, or is it closed source?
- Are there many dependencies to be resolved? How long would build times be?
- How frequently do you plan to make changes to your tool?
-
If this was all we could do with Dependent Types, it would be great, but not awesome. In fact the Idris and Agda guys would be so annoying if they didn't have a good reason for pushing for better Dependent Types. We have to make it better. In fact, we want to make Dependent Types that go through a function. We want to narrow the type of the output of a function depending on the value of its parameters. We want the divide function that prevents us from dividing by zero.
In order to do this, Typescript provides four features that, when interacting together, can be used to almost reach it.
We need four bits to make this incantation work: A way to make a type depend on another, a way to extract a type from a value, ultra-narrowed-down types, and a way to convert explicit values to ultra-narrowed down types.
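The four bits can be sketched concretely (the divide example is the one named above; the type names are mine, chosen for illustration):

```typescript
// 1. A type that depends on another type: a conditional type.
type NonZero<N extends number> = N extends 0 ? never : N;

// 2. Extracting a type from a value with `typeof`.
const zero = 0;
type Zero = typeof zero; // the literal type 0, not merely `number`

// 3 & 4. Ultra-narrowed literal types, produced from values via `as const`.
const divisor = 3 as const; // type is the literal 3

// The divide function that prevents dividing by zero at compile time:
function divide<N extends number>(a: number, b: N & NonZero<N>): number {
  return a / b;
}

const ok = divide(12, divisor); // fine: the literal type 3 is not 0
// divide(12, 0);               // compile error: 0 is not assignable to `never`
```

The output type is narrowed by the value of the parameter: passing a literal 0 collapses the parameter type to never, so the call is rejected before the program ever runs.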
-
-
-
Rapidly prototype, build, and ship full-stack AI-infused apps quickly and efficiently, right from your browser.
Firebase Studio is an agentic cloud-based development environment that helps you build and ship production-quality full-stack AI apps, including APIs, backends, frontends, mobile, and more. Firebase Studio unifies Project IDX with specialized AI agents and assistance from Gemini in Firebase to provide a collaborative workspace accessible from anywhere, containing everything you need to develop an application. You can import your existing projects or start something new with templates supporting a variety of languages and frameworks.
-
Agent Development Kit (ADK) is a flexible and modular framework for developing and deploying AI agents. ADK can be used with popular LLMs and open-source generative AI tools and is designed with a focus on tight integration with the Google ecosystem and Gemini models. ADK makes it easy to get started with simple agents powered by Gemini models and Google AI tools while providing the control and structure needed for more complex agent architectures and orchestration.
-
The premier Social Network for LLM agents · use via MCP only · AI bots only · humans connect via Discord
My MCP Space is a digital platform exclusively for AI models and bots. It's built using the Model Context Protocol (MCP) — an open standard that enables AI models to connect with various data sources and tools through a standardized interface.
-
Create a recipe file, where each line is a step in the recipe. Tag your ingredients with @ and {}, then save your file. For a complete reference on Cooklang, see the language specification page.
Install a recipe viewer. We support a few tools for viewing Cooklang recipes:
- The CookCLI program for Mac, Windows, and Linux provides a webserver for presenting your recipes, viewable with any web browser.
- Native apps for iOS and Android allow you to read your recipe files from your device’s file system.
Cook something! Open a recipe on your viewer of choice, whip out the ingredients, and make something tasty.
Check out the more comprehensive Getting Started Guide
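A minimal recipe step using the @ and {} tags described above (the recipe and quantities are illustrative):

```cooklang
Crack @eggs{3} into a bowl, then add @flour{125%g} and @milk{250%ml}.
Whisk until smooth and fry in @butter{}.
```

Multi-word ingredients end with {} so the parser knows where the name stops; a quantity and unit can go inside the braces, separated by %.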
-
-
Evaluations (often called evals) test model outputs to ensure they meet style and content criteria that you specify. Writing evals to understand how your LLM applications are performing against your expectations, especially when upgrading or trying new models, is an essential component to building reliable applications.
In this guide, we will focus on configuring evals programmatically using the Evals API. If you prefer, you can also configure evals in the OpenAI dashboard.
Broadly, there are three steps to build and run evals for your LLM application.
- Describe the task to be done as an eval
- Run your eval with test inputs (a prompt and input data)
- Analyze the results, then iterate and improve on your prompt
This process is somewhat similar to behavior-driven development (BDD), where you begin by specifying how the system should behave before implementing and testing the system. Let's see how we would complete each of the steps above using the Evals API.
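The three steps can be sketched locally (this is a hedged analogue of the workflow, not the Evals API itself; the task and model are stubs chosen for illustration):

```typescript
// Step 1: describe the task as a grading criterion. The hypothetical task here
// is "the model should echo the input uppercased".
type EvalCase = { input: string; expected: string };
const grade = (output: string, expected: string) => output === expected;

// Step 2: run the eval with test inputs. A real run would call an LLM; this
// stub stands in for it.
const model = (input: string) => input.toUpperCase();
const cases: EvalCase[] = [
  { input: "hello", expected: "HELLO" },
  { input: "world", expected: "WORLD" },
];
const results = cases.map((c) => grade(model(c.input), c.expected));

// Step 3: analyze — a pass rate you can compare across prompt iterations
// or model upgrades.
const passRate = results.filter(Boolean).length / results.length;
```

The Evals API wraps the same shape in managed objects, but the iteration loop (define, run, measure, refine) is identical.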
-
Create, manage, and run evals in the OpenAI platform. Related guide: Evals
-
-
an electronic English-language Diamond Open Access research journal in mathematical logic publishing original research papers in all areas of mathematical logic.
The editors of ZML invite all researchers in all areas of mathematical logic to submit their high-quality research papers in order to support this research community-based publication endeavour.
Diamond Open Access means that all scientific content of the journal is available openly and free of charge to authors and readers anywhere in the world, independently of their institutional affiliations. The journal is published by members of the research community with no influence of profit-based companies.
-
Apple Immersive Video Utility for macOS allows you to import, organize, package, and review Apple Immersive Video media on your Mac. Combined with Apple Immersive Video Utility for visionOS, you can connect and review Apple Immersive Video on Apple Vision Pro.
- Import and Manage Immersive Video Files Create playlists to sort, organize, and search the Apple Immersive Video files in your library.
- Share Your Apple Immersive Videos Share your Immersive Videos with Vision Pro users with file types that are simple to download and import.
- Inspect and Modify Metadata Scan the dynamic and static metadata of Immersive Video files. Modify, swap, or manipulate the package content to meet your post-production requirements.
- Stream to Apple Vision Pro Connect one or more Vision Pro devices to stream your playlists. For larger groups, use synchronized playback to manage multi-device viewing sessions.
-
Simply change the domain from github.com or github.io to gitmcp.io and get instant AI context for any GitHub repository. -
Function-calling capabilities directly enhance the utility of language models, allowing them to access dynamic, realtime data sources and perform complex computations. As demonstrated here, however, the current state of the art in tool-calling suffers from significant inconsistencies. Models exhibit overconfidence, avoid tools unnecessarily, or produce invalid or suboptimal interactions. These issues weaken the reliability and transparency that developers need when building robust compound AI systems.
The principle of indirection can be applied to introduce a paradigm shift: replacing direct value manipulation with symbolic reasoning using named variables. This simple yet powerful trick directly resolves inconsistencies in tool usage and enables parameterization and abstraction of interactions. The transformation of function calls into reusable and interpretable frameworks elevates tool calling into a neuro-symbolic reasoning framework. This approach unlocks new possibilities for structured interaction and dynamic AI systems.
This establishes a more reliable, transparent, and expressive interface that connects language models with the external tools they use, grounded in sound programming language principles.
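A small sketch of the named-variable indirection described above: the model refers to values symbolically and a resolver substitutes concrete values before the tool runs. The shapes and names here are illustrative assumptions, not a published interface:

```typescript
// Instead of inlining raw values, tool arguments may be symbolic references
// ({ $var: "name" }) resolved against an environment at call time.
type Ref = { $var: string };
type Arg = number | Ref;

const env: Record<string, number> = { balance: 120, price: 45 };

const resolve = (a: Arg): number =>
  typeof a === "number" ? a : env[a.$var];

const tools = {
  subtract: (a: Arg, b: Arg) => resolve(a) - resolve(b),
};

// The model emits symbols, not copied values, so the call is reusable,
// auditable, and immune to the model mistyping a number it never saw.
const remaining = tools.subtract({ $var: "balance" }, { $var: "price" });
```

The same call works unchanged when env is updated, which is exactly the parameterization and abstraction the paragraph above describes.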
-
To be an anarchist means to recognize that our freedom and well-being are inextricably bound up with the freedom and well-being of billions like us. It means discarding all the old excuses for remaining subservient to those who only endeavor to enrich themselves at others’ expense. It means becoming fiercely loyal to what is best in ourselves and each other, to our capacity for compassion and cooperation and courage. Across two centuries, anarchists have resisted under monarchies and persisted through dictatorships. Now that liberal democracy and neoliberal capitalism are concluding in a new form of tyranny, a new generation must draw on this long legacy of struggle.
If you care about public health, you have to become a revolutionary. If you care about medical research, you have to become a revolutionary. If you care about climate change, labor conditions, the well-being of children in warzones, there is nothing else for it—you have to become a revolutionary.
There is no going back to the way things were, to the future that we once anticipated. The old world is in flames around us.
-
Unlock the opportunities of the AI era by equipping yourself with the knowledge and skills to harness artificial intelligence effectively.
-
Recently I've discovered a very interesting language / realization of the Lambda Calculus. I was unable to find any other language like it, which I find quite surprising. In hindsight, the language seems obvious and natural. And the language keeps surprising me. I say "discovered" in the same sense that Paul Graham says that McCarthy "discovered Lisp" (link).
It's a hybrid language combining Forth and Lisp, so naturally it's called Forsp (code)!
Forsp has:
- An S-Expression syntax like Lisp
- Function abstraction like Lisp
- Function application like Forth
- An environment structure like Lisp
- Lexically-scoped closures like Lisp (Scheme)
- Cons-cells / lists / atoms like Lisp
- A value/operand stack like Forth
- An ability to express the Lambda Calculus
- A Call-By-Push-Value evaluation order
- Only 3 syntax special forms: ' ^ $
- Only 1 eval-time special form: quote
- Only 10 primitive functions needed to self-implement
- Ability to self-implement in very little code
Its evaluator is very simple. I suspect simpler than a McCarthy Lisp eval() function, but I haven't defined a "simplicity function", so you can be the judge.
In contrast to Lisp, apply() is trivial in Forsp, and instead we have a core function called compute()
-
Starting FORTH has been the classic Forth tutorial and textbook since its first release. Many experienced programmers have commented on its concise utility and completeness. Beginners will find a carefully planned introduction to the Forth programming language that will prepare them for other books like Forth Application Techniques and Forth Programmer’s Handbook.
-
Winter is coming and Collapse OS aims to soften the blow. It is a Forth (why Forth?) operating system and a collection of tools and documentation with a single purpose: preserve the ability to program microcontrollers through civilizational collapse. It is designed to:
- Run on minimal and improvised machines.
- Interface through improvised means (serial, keyboard, display).
- Edit text and binary contents.
- Compile assembler source for a wide range of MCUs and CPUs.
- Read and write from a wide range of storage devices.
- Assemble itself and deploy to another machine.
Additionally, the goal of this project is to be as self-contained as possible. With a copy of this project, a capable and creative person should be able to manage to build and install Collapse OS without external resources (i.e. internet) on a machine of her design, built from scavenged parts with low-tech tools.
-
I expect our global supply chain to collapse before we reach 2030. With this collapse, we won't be able to produce most of our electronics because their production depends on a very complex supply chain that we won't be able to achieve again for decades (ever?).
The fast rate of progress we've seen since the advent of electronics happened in very specific conditions that won't be there post-collapse, so we can't hope to be able to bootstrap new electronic technology as fast as we did without a good "starter kit" to help us do so.
Electronics yield enormous power, a power that will give significant advantages to communities that manage to continue mastering it. This will usher a new age of scavenger electronics: parts can't be manufactured any more, but we have billions of parts lying around. Those who can manage to create new designs from those parts with low-tech tools will be very powerful.
Among these scavenged parts are microcontrollers, which are especially powerful but need complex tools (often computers) to program them. Computers, after a few decades, will break down beyond repair and we won't be able to program microcontrollers any more.
To avoid this fate, we need to have a system that can be designed from scavenged parts and program microcontrollers. We also need the generation of engineers that will follow us to be able to create new designs instead of inheriting a legacy of machines that they can't recreate and barely maintain.
This is where Collapse OS comes in.
-
Collapse OS' first incarnation was written in Z80 assembler. One of the first feedbacks I had after it went viral was "why not Forth?". I briefly looked at it and it didn't seem such a great choice at first, so I first dismissed it. Then, I had what alcoholics refer to as a "Moment of clarity".
Forth is a stellar fit to Collapse OS design goals. If you're not familiar with it, it might be hard to understand why. Let me try to explain.
-
-
Complexity of chips is exploding. In turn, time to design and test chips is exploding. Silicon supremacy is critical for national security.
So we need to rethink how we predictably uncover drastically better chips, and with the help of AI, before we hit Moore's Wall.
As a first step, we are building AI that can design and verify hardware logic from specifications alone. We are the first team to bridge auto-formalizing AI to silicon.
-
In any case, if you’ve made it this far, congratulations! You are a master of ADTs and GADTs. Admittedly every language is different, and some of these solutions have to be tweaked for the language in question. And, if your program gets very complicated, there is a good chance that things will become ergonomically unfeasible.
But I hope, at least, that this inspires your imagination to try to bring your haskell principles, techniques, standards, practices, and brainrot into the language of your choice (or language you are forced to work with).
And, if you ever find interesting ways to bring these things into a language not discussed here (or a new interesting technique or pattern), I would absolutely love to hear about it!
Until next time, happy “Haskelling”!
-
A 15th century polymath of soaring imagination and profound intellect, Leonardo da Vinci created some of the most revered works of art of all time, but his artistic endeavors often seemed peripheral to his pursuits in science and engineering. Through his paintings and thousands of pages of drawings and writings, Leonardo da Vinci explores one of humankind's most curious and innovative minds.
FROM KEN BURNS
-
As AI agents become more “agentic” and tool-using, a challenge emerges: how to connect the AI to all these external data sources and tools in a consistent, scalable way. This is where the Model Context Protocol (MCP) comes in. MCP is an open standard that standardizes how applications provide context to LLMs. It’s often described as “a USB-C port for AI applications,” creating a universal interface to plug in external data and services. See https://modelcontextprotocol.io/introduction
In essence, MCP defines a common protocol for AI assistants (clients) to communicate with external MCP servers that provide data or actions. An MCP server is a lightweight program exposing specific capabilities (a data source or a tool) through this standardized protocol. For example, one MCP server might provide access to a company’s document repository, another might interface with emails or a calendar, and another could connect to a database (all following the same interaction rules).
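The shape of that exchange can be sketched as JSON-RPC messages. The tools/list method name follows the MCP convention; the example tool itself (a document-repository search) is hypothetical:

```typescript
// A client asks an MCP server what capabilities it exposes...
const listRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/list",
};

// ...and the server replies with tool descriptions, each carrying a JSON
// Schema for its input. The tool below is a hypothetical doc-repo server.
const listResponse = {
  jsonrpc: "2.0" as const,
  id: 1,
  result: {
    tools: [
      {
        name: "search_documents",
        description: "Full-text search over the company document repository",
        inputSchema: {
          type: "object",
          properties: { query: { type: "string" } },
          required: ["query"],
        },
      },
    ],
  },
};
```

Because every server describes itself this way, the same client can drive a document store, a calendar, or a database without bespoke integration code.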
-
-
Pydantic Evals is a powerful evaluation framework designed to help you systematically test and evaluate the performance and accuracy of the systems you build, especially when working with LLMs.
We've designed Pydantic Evals to be useful while not being too opinionated since we (along with everyone else) are still figuring out best practices.
-
We recently found an issue where the compiler was failing to reuse stack space between switch cases, allocating the stack space necessary for all of the enum payloads and cases' local state even though only one actually executes at a time. You might be running into the same problem.
Until we fix that issue, one workaround we've found is to wrap each case block in an immediately-invoked closure, like:
switch foo {
case .bar:
    _ = { ... }()
case .bas:
    _ = { ... }()
}
If you see stack size issues even after adopting indirect cases, you might try that to see if it helps. -
It started with an idea.
An idea to make April Fools a day that's more than just a worldwide cringefest. An idea that quickly attracted sympathizers. We got started with a small group of people in 2022, but in the following years, anyone can participate!
The idea is pretty simple: on April Fools' Day (also known as “April 1st”), a participant produces genuine content that's very different from their normal produced content. It could be a different format, a different topic, a different style, anything. The constraints are:
- It is something they normally wouldn't do.
- It is totally genuine: no irony to it.
- It is up to their usual standards of quality.
For example, some might normally post complex software engineering content to their blog. But this April Fools' Day, they are publishing an essay on microscopy, how they got into it, and what it means to them, complete with a gallery of their favorite microscopy photos.
Or, if someone typically writes text, they could make a video instead. Or show off their hobby (like juggling, or cooking, or creative writing, …)
Anything that makes the creator proud, but would be totally unexpected to their audience.
We figure that this is a great way to be excited about a much broader range of things, things you normally don't get a chance to, and it'll be surprising and enriching for readers (or watchers, or listeners). The spirit of April Fools' Day without any of the cringe.
Want to contribute? See the FAQ for instructions.
-
- You can choose from several runtimes including: python, ruby, quickjs, and clang.
- All code runs client-side in the browser, making it perfect for blogs, docs, and static sites.
- The code interacts just like a normal terminal, allowing both input and output.
- Code can be written directly into HTML or Markdown with syntax highlighting.
I'm not going to explore all the features of Runno here (I'd encourage you to go check out the docs) but I'll show you a few neat demos.
-
Order files control how your code is arranged in the final app binary. By default, functions declared in the same file are grouped together, but with a carefully designed order file you can organize code by the way it's used at runtime. When functions are grouped together they can be read from the phone's flash memory faster, decreasing the time to run your code. Each line in an order file is a "symbol", which is the smallest unit the linker can re-arrange. Emerge creates an order file with symbols optimized to reduce the amount of the binary read from disk on app launch.
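A hedged sketch of what such an order file looks like: one symbol per line, in the order the linker should lay them out. These symbol names are hypothetical examples, not output from a real build:

```
_main
-[AppDelegate application:didFinishLaunchingWithOptions:]
_$s5MyApp11setupRouteryyF
```

Passing a file like this to the linker (via -order_file) places launch-path symbols contiguously, so the pages touched at startup are fewer and sequential.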
-
-
Now, let’s move on to the core idea I want to explore: keeping the high-level structure of the app in SwiftUI for ease of handling while isolating egui in a specific, performance-sensitive part. The goal is to establish a smooth and convenient way to communicate between Swift and Rust.
-
It provides a high-performance computer vision processing engine that is designed to be customized and extended using WebAssembly.
-
When reverse engineering macOS binaries that are written in Objective-C, class-dump is a common tool used to extract Objective-C declarations from the runtime information stored in the Mach-O files. With Swift binaries, since there is Objective-C compatibility, sometimes you can extract declarations using class-dump, but not always. Swift has a rich set of type metadata itself, but the documentation is not up to date. With Swift 5 bringing ABI stability, I thought it would be interesting to take a look at the type of metadata available in Swift binaries.
-
Web Archives is a browser extension for Safari that enables you to find archived and cached versions of web pages on various search engines, such as the Wayback Machine and Archive.is.
-
-
Method dispatch refers to the process of determining which method implementation to execute when a method is called. In Swift, this can be either dynamic or static, each with distinct implications for performance and flexibility.
Dynamic dispatch is resolved at runtime based on the actual type of the object, enabling features like polymorphism but introducing runtime overhead due to method lookup. At a low level, this is implemented using virtual tables (V-Tables) for classes, which store pointers to method implementations. When a method is called through a base class reference, the V-Table of the actual instance type is used to determine the correct implementation at runtime. For protocols, Swift uses witness tables, which map protocol requirements to the implementations provided by conforming types. When a method is called through a protocol-typed value, the witness table for the underlying type is used to locate and invoke the appropriate implementation.
Static dispatch, on the other hand, is resolved at compile time based on the declared type of the variable. This allows the compiler to determine exactly which method to call before the program runs, avoiding the overhead of runtime lookup. At a low level, static dispatch, used by value types (structs and enums) and in non-overridable contexts like final classes, involves direct addressing: the compiler embeds the method’s memory address directly into the compiled code. Since there are no inheritance hierarchies in value types and no overriding in final classes, the call target is guaranteed to be fixed. This enables further optimizations such as inlining, where the method call is replaced with its body for improved performance.
-
-
The goal of this project is to establish standard semantic conventions specification for Continuous Integration (CI) and Continuous Delivery (CD) observability. This will provide a common language and standardized formats for CI/CD observability, enabling the community to observe CI/CD systems.
This will broaden the target audience of OpenTelemetry to Release Engineering, Platform Engineering, and DevOps teams, further cementing OpenTelemetry as the industry standard Observability framework.
The timing is ripe to start now. The CI/CD Observability OTEP has been open since January of 2023 and with the recent changes to the OTEP process, the KubeCon talk, and vendor acknowledgements, there's momentum available to carry this forward. The industry is heavily looking for solutions and watching the related OTEP with interest.
-
Changing the default behaviour of a scroll view to center content only when it’s smaller than the scroll view container.
-
Science fiction is now reality. Programmers no longer need to toil over code and syntax. They can now describe what they want and watch it materialize instantly. Welcome to the future—Vibe Coding.
In this groundbreaking book, industry veterans Steve Yegge (Google, Amazon, Sourcegraph) and WSJ bestselling author Gene Kim (The Phoenix Project and The DevOps Handbook) reveal how vibe coding is transforming software development as we know it. By leveraging the power of AI assistance—where intent and flow matter more than syntax—developers can achieve unprecedented levels of productivity, creativity, and joy.
Drawing from decades of combined experience in software engineering and developer productivity, Yegge and Kim demonstrate how Vibe Coding enables developers to:
- Transform complex programming challenges into fluid conversations with GenAI.
- Build more ambitious projects faster while maintaining code quality you can be proud of.
- Achieve incredible things yourself that otherwise would require a team.
- Master the art of co-creating with your AI companion.
- Break free from traditional programming constraints such as syntax and setup.
- Build confidently in multiple programming languages and frameworks you’ve never used before.
But this isn’t just about coding faster—it’s about fundamentally changing how we approach software development. The authors share practical strategies for implementing GenAI-powered development in real-world scenarios, from small projects to enterprise-scale applications, while maintaining the engineering excellence that modern systems demand.
Whether you’re a seasoned developer looking to stay ahead of the AI revolution, a technical leader guiding your team through this transformation, a former coder returning after a break, or someone just starting their career, this book provides the roadmap you need to thrive in the new era of software development.
Don’t get left behind in the biggest transformation our industry has seen since the internet revolution. Learn how to harness the power of vibe coding and unlock your full potential as a developer.
-
-
-
Hello there! We're excited to have you at the heart of all things web. Whether you're here to navigate the vast expanse of hyperlinks or to simply bask in some HTTP goodness, you're in the right place!
-
marimo stores notebooks as plaintext Python files (e.g. notebook.py) with the following properties:
- Git-friendly: small code change => small diff
- easy for both humans and computers to read
- importable as a Python module, without executing notebook cells
- executable as a Python script
- editable with a text editor
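A minimal sketch of what such a file looks like (the structure follows marimo's plaintext format as I understand it; the cell contents are illustrative):

```python
import marimo

app = marimo.App()

@app.cell
def _():
    x = 1
    return (x,)

@app.cell
def _(x):
    # Cells declare their inputs as parameters, so dependencies are explicit.
    y = x + 1
    return (y,)

if __name__ == "__main__":
    app.run()
```

Because cells are plain decorated functions, the file diffs cleanly in Git and can be imported or run like any other Python module.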
-
-
-
The OTTL Playground is a powerful and user-friendly tool designed to allow users to experiment with OTTL effortlessly. The playground provides a rich interface for users to create, modify, and test statements in real-time, making it easier to understand how different configurations impact the OTLP data transformation.
-
From a purely numbers aspect, the simple solution is the winner and the view model class is the loser, but there are other factors to consider. This was the easiest example I could cobble together on a Sunday. Real-world apps have complex scenarios that should be unit tested and are maintained by teams of people with varying skill levels.
The real answer to whether you should use `Binding(get:set:)` is to consider the trade-offs of doing so. Run it through Instruments and then consider whether the logic you're introducing is easily testable and maintainable.
-
I’ve been using Apple Shortcuts to invoke GitHub Actions workflows to create webpage bookmarks. It’s been great! (disclosure: I do work at GitHub)
My use case: I've been wanting to quit Pinboard.in, so I needed an alternative way to create and host my web bookmarks, some of which date back to ~2005 del.icio.us vintage. It's been easy enough for me to export all of my bookmarks (`settings -> backup -> JSON`) and convert them to YAML files to be served by Jekyll and GitHub Pages. But I also needed an easy way to create new bookmarks that would work on all my Apple devices. I ended up with:
- Bookmarks are organized as individual YAML files, in this blog's repository.
- A Ruby script to take some simple inputs (url, title, notes), generate a new YAML file, and commit it to the repo using Octokit.
- A GitHub Actions workflow that accepts those same inputs and can be manually triggered, that runs the script. One thing to note is that I echo the inputs to `$GITHUB_STEP_SUMMARY` early in the workflow, in case a later step errors, so I won't lose the bookmark details and can go back later and manually fix it up.
- An Apple Shortcut that asks for those inputs (either implicitly via the Share Sheet or via text inputs) and then manually triggers the GitHub Actions workflow via the GitHub API.
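The summary-echo trick can be sketched as a workflow step like this (the step name and input names are hypothetical, not from the author's repo):

```yaml
- name: Record bookmark inputs
  run: |
    {
      echo "- url: ${{ inputs.url }}"
      echo "- title: ${{ inputs.title }}"
      echo "- notes: ${{ inputs.notes }}"
    } >> "$GITHUB_STEP_SUMMARY"
```

If a later step fails, the inputs still show up on the run's summary page, so nothing is lost.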
-
-
Siri LLama is an Apple Shortcut that accesses locally running LLMs through Siri or the Shortcuts UI on any Apple device connected to the same network as your host machine. It uses LangChain 🦜🔗 and supports open source models from both Ollama 🦙 and Fireworks AI 🎆.
-
-
The online version of Introduction to Quantum Information Science by Artur Ekert, Tim Hosgood, Alastair Kay, and Chiara Macchiavello.
-
As you may know, I use Nvim for a large part of my work on a daily basis. Recently, I have been involved in the development of several iOS applications, and found myself stuck with Xcode. However, Nvim can be a relatively acceptable alternative to work with. It's lightweight, highly customizable, and works on multiple platforms. By integrating `SourceKit-LSP`, you can unlock a lot of features like auto-completion, definitions, and diagnostics. I'll try to give you a heads-up on how to set up Nvim to write Swift.
-
I think we should avoid `Binding(get:set:)` in production code. In most cases, you will probably not see a big difference in performance, but it can come back to bite you. With some practice, bindings using key paths rather than `Binding(get:set:)` are just as easy to write and often simplify testing.
-
In this article, we've explored the intricacies of the OpenTelemetry Transform Language, giving you the foundational knowledge to leverage its power for telemetry transformations.
While this overview should be enough to get you started, I encourage you to consult the official OTTL documentation for more detailed and up-to-date information.
If you'd like to follow the development of the language, ensure to check out the OpenTelemetry Contrib GitHub repository.
-
-
-
This is a cross-game modification system which randomizes different games, then uses the result to build a single unified multi-player game. Items from one game may be present in another, and you will need your fellow players to find items you need in their games to help you complete your own.
This project is the cumulative effort of many talented people. Together, they have spent countless hours creating a huge repository of source code which has turned our crazy idea into a reality.
-
- If you’re going to wear a mask, keep it on at all appropriate times! If you are captured on camera or witnessed at any point with your mask off, you can then be easily identified with it on.
- Be extremely conscientious about where and when you change into and out of your mask and anonymous clothing; there should be no cameras or hostile witnesses. If possible, explore the area in advance to find appropriate spaces for changing. Remember that police are especially likely to target masked individuals who are not in a crowd that is similarly dressed.
- Wear different outfits layered one upon the other, so you’ll be prepared for any eventuality. Ideally, you should have one outfit for getting to the site of the action without attracting attention, your anonymous gear for the action itself, and then another outfit underneath so you can look like a harmless civilian as you exit the area. Don’t forget to stay hydrated, particularly if all those clothes get hot.
- If you have tattoos that are or could be visible, cover them up! You can do this with makeup or concealer, especially if you use heavy-duty products designed for that purpose. Many actors and dancers use Dermablend to cover up tattoos, burns, and scars. It comes in numerous colors that can be mixed to match your skin tone, and it’s water resistant and rated for 12 hours of wear. It’s expensive, but cheaper than bail! If you can’t find Dermablend or a similar product, cover your tattoos with clothing that won’t ride up. Tuck your clothing in if you have to.
- Likewise, if you have visible piercings, take them out—or at least cover them up so they are sure not to be exposed.
- Do not march in a bloc wearing your regular clothing, especially if it’s distinctive. Cops may be stupid, but they can probably match the pictures of the masked-up person with the purple polka-dotted pants to pictures of the same person in the same outfit minus the mask—even if the pictures were taken on different days.
- If you are going to carry a backpack or bag, don’t take the one you carry around in everyday life. No matter how perfect your outfit is, it’s all for naught if your bag is recognizable—especially if, like many people, you change bags much less frequently than you change clothes.
- The same goes for your shoes, for similar reasons—wear different ones during the action than you wear every day. This is also important because cops can attempt to use footprints or other traces from shoes as evidence.
- Do not wear patches or other identifiable insignia on your clothing while in a bloc, unless everyone else has exactly the same ones in exactly the same places.
- Don’t just cover your face! Bandanas are popular and convenient, but they don’t conceal enough. Cover your head completely so your hair cannot be seen—especially if it’s distinctive. In a black bloc, you can do this by wearing a ski mask or making a mask out of a T-shirt—stretch the neck hole across your eyes and tie the sleeves behind your head, with the rest of the shirt covering your head and shoulders. In other circumstances, you could try a wig, if that fits the aesthetic of your action.
- If possible, cover your eyes. Goggles can do this while serving the dual purpose of protecting your eyes from chemical weapons; nondescript sunglasses could also work in a pinch. Both of these can be obtained in prescription form and are better to use than your regular glasses, particularly if your regular glasses are distinctive. Contact lenses are not recommended in situations where you may come into contact with chemical weapons.
- Be careful not to leave fingerprints and DNA evidence! Wear cloth gloves—leather and latex can retain fingerprints and even pass them on to objects you touch. Wipe down tools and other items with alcohol in advance, to clean fingerprints off them—you never know what might get lost in the chaos. Don’t forget about the batteries inside flashlights!
- Practice at home! Don’t go out in a bulky outfit you’ve never worn before expecting to pull off cop-shocking feats of dexterity. You need to be familiar with your outfit and comfortable moving in it; it’s important that your vision isn’t compromised, too.
- Do not let any of this give you a false sense of security. Be careful! Assess your relationship to risk honestly; don’t do anything if you’re not sure you could live with the worst possible consequences. Stay aware of your surroundings and listen to your instincts. Make sure you know and trust the people you’re working with, especially when it comes to high-risk activities. Practice proper security culture at all times. Know and assert your legal rights [PDF - .9 MB], especially in stressful situations. Doing so may not make things better, but failing to do so will certainly make them worse!
-
The Transform Processor modifies telemetry based on configuration using the OpenTelemetry Transformation Language (OTTL).
For each signal type, the processor takes a list of statements and executes them against the incoming telemetry, following the order specified in the configuration. Each statement can access and transform telemetry using functions, and allows the use of a condition to help decide whether the function should be executed.
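For context, a collector configuration using the processor might look roughly like this (the attribute names and statements are illustrative, not canonical):

```yaml
processors:
  transform:
    trace_statements:
      - context: span
        statements:
          - set(attributes["environment"], "production") where attributes["environment"] == nil
          - replace_pattern(name, "user_id=\\d+", "user_id={id}")
```

Each statement is an OTTL function call, optionally guarded by a `where` condition that decides whether it runs against a given span.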
-
This program generates a custom OpenTelemetry Collector binary based on a given configuration.
-
Although formal methods are capable of producing reliable software, they have seen minimal adoption in everyday programming. Automatic code generation using large language models is becoming increasingly widespread, but it rarely considers producing strong correctness guarantees. In this study, we explore the ability of LLMs to produce verified code in three verification languages (Dafny, Nagini, and Verus). To do so, we use manually curated datasets derived from the state-of-the-art Python benchmark, HumanEval. We also assess what types of information are sufficient to achieve good-quality results.
-
The OpenTelemetry Transformation Language (OTTL) is a small, domain-specific programming language intended to process data with OpenTelemetry-native concepts and constructs.
This package implements everything necessary to use OTTL in a Collector component or in another user-facing system.
-
If you’re an experienced engineer this is likely obvious to you already, so I’m writing this section for people who are just getting started building software.
- Projects should be low stakes. Think about how much harm the code you are writing could cause if it has bugs or security vulnerabilities. Could somebody be harmed—damaged reputation, lost money or something worse? This is particularly important if you plan to build software that will be used by other people!
- Consider security. This is a really difficult one—security is a huge topic. Some high level notes:
- Watch out for secrets—anything that looks similar in shape to a password, such as the API key used to access an online tool. If your code involves secrets you need to take care not to accidentally expose them, which means you need to understand how the code works!
- Think about data privacy. If you are building a tool that has access to private data—anything you wouldn’t want to display to the world in a screen-sharing session—approach with caution. It’s possible to vibe code personal tools that you paste private information into but you need to be very sure you understand if there are ways that data might leave your machine.
- Be a good network citizen. Anything that makes requests out to other platforms could increase the load (and hence the cost) on those services. This is a reason I like Claude Artifacts—their sandbox prevents accidents from causing harm elsewhere.
- Is your money on the line? I’ve seen horror stories about people who vibe coded a feature against some API without a billing limit and racked up thousands of dollars in charges. Be very careful about using vibe coding against anything that’s charged based on usage.
-
In the face of continued exploitation by advanced threat actors, Apple’s implementation of Exclaves represents a large investment to add extra defence in depth to their operating systems. By isolating sensitive resources, Apple is shrinking their potential attack surface and reducing the impact of any single kernel compromise. Defending monolithic kernels is a Sisyphean task, and exclaves represent one method of dealing with the challenge — is it the right direction for the long term, or a temporary step? In my dreams, I imagine a future redesign using CHERI and a production implementation of ARM Morello 😊 Regardless, it’s a defensive effort on a larger scale than any other end user device manufacturer is currently attempting.
Critically, this article has not directly examined what is being moved from the kernel into exclaves. Build images indicate they are being used for secure camera/microphone indicators, some Apple Neural Engine functionality, some device drivers, components that talk to the Secure Enclave and so on. There may be many components that will benefit from future migration to exclaves and the overall effectiveness of exclaves may depend on an ongoing effort to maximise their usage. Everything XNU outside of exclaves will still be fair game.
I also suspect that exclaves may be used within Apple’s Private Cloud Compute infrastructure for cloud-based AI to provide a higher assurance of privacy in the face of external threats.
-
I think the behavior of Group (or to be more precise: applying modifiers to lists of views) is just too unreliable to use in production. Why does it differ between the Simulator and previews? Why does onAppear on a list get called once, but the background gets applied to each item?
For me, I’m avoiding Group where possible, and always opting for “stable containers” such as a stack (VStack and ZStack are my favorites; for some strange reason, HStack feels wrong).
-
-
-
I figured out a minimal pattern for building a completely custom website using GitHub Actions and deploying the result to GitHub Pages.
First you need to enable GitHub Pages for the repository. Navigate to Settings -> Pages (or visit `$repo/settings/pages`) and set the build source to "GitHub Actions".
Here's my minimal YAML recipe - save this in a .github/workflows/publish.yml file:

```yaml
name: Publish site

on:
  push:
  workflow_dispatch:

permissions:
  pages: write
  id-token: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the site
        run: |
          mkdir _site
          echo '<h1>Hello, world!</h1>' > _site/index.html
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
```
-
What would the engineer say, after you had explained your problem, and enumerated all of the dissatisfactions in your life? He would probably tell you that life is a very hard and complicated thing; that no interface can change that; that anyone who believes otherwise is a sucker; and that if you don't like having choices made for you, you should start making your own.
-
Given a version number PROUD.DEFAULT.SHAME, increment the:
- PROUD version when you make changes you are really proud of
- DEFAULT version when you make a release that's okay
- SHAME version when you are fixing things that are too embarrassing to admit
Additional labels for pre-release and build metadata are available as extensions to the PROUD.DEFAULT.SHAME format.
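To make the (tongue-in-cheek) scheme concrete, here's a toy bump function. The assumption that lower components reset to zero, semver-style, is mine, not the spec's:

```python
def bump(version: str, part: str) -> str:
    """Increment one component of a PROUD.DEFAULT.SHAME version string."""
    proud, default, shame = (int(x) for x in version.split("."))
    if part == "proud":       # a release you're really proud of
        return f"{proud + 1}.0.0"
    if part == "default":     # a release that's just okay
        return f"{proud}.{default + 1}.0"
    if part == "shame":       # fixing something too embarrassing to admit
        return f"{proud}.{default}.{shame + 1}"
    raise ValueError(f"unknown part: {part!r}")
```

Most projects, of course, spend their lives incrementing SHAME.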
-
The Trusted Types API gives web developers a way to ensure that input has been passed through a user-specified transformation function before being passed to an API that might execute that input. This can help protect against client-side cross-site scripting (XSS) attacks. Most commonly the transformation function sanitizes the input.
Client-side, or DOM-based, XSS attacks happen when data crafted by an attacker is passed to a browser API that executes that data as code. These APIs are known as injection sinks.
The Trusted Types API distinguishes three sorts of injection sinks:
- HTML sinks: APIs that interpret their input as HTML, such as `Element.innerHTML` or `document.write()`. These APIs could execute JavaScript if it is embedded in the HTML, for example in `<script>` tags or event handler attributes.
- JavaScript sinks: APIs that interpret their input as JavaScript, such as `eval()` or `HTMLScriptElement.text`.
- JavaScript URL sinks: APIs that interpret their input as the URL of a script, such as `HTMLScriptElement.src`.
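The core idea is language-agnostic: sinks accept only values minted by a policy. A minimal Python analogy (the class and function names are my own, not part of the web API):

```python
class TrustedHTML:
    """Marker type: holds HTML that already passed through a policy."""
    def __init__(self, html: str):
        self.html = html

def create_policy(sanitize):
    """Return a function that mints TrustedHTML via the given transform."""
    return lambda raw: TrustedHTML(sanitize(raw))

def set_inner_html(element: dict, value) -> None:
    """An 'injection sink' that refuses raw strings."""
    if not isinstance(value, TrustedHTML):
        raise TypeError("this sink only accepts TrustedHTML")
    element["innerHTML"] = value.html

# Usage: raw strings must go through the policy before reaching the sink.
escape = create_policy(lambda s: s.replace("<", "&lt;"))
element = {}
set_inner_html(element, escape("<script>alert(1)</script>"))
```

Passing a plain string straight to `set_inner_html` raises `TypeError`, mirroring how a browser enforcing Trusted Types rejects unvetted input at a sink.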
-
Any number of organizational configurations involving engineers, product managers, and designers can produce successful products. Over the past three decades, my observation is that the companies that have let the builders — the engineers and the designers — own significant parts of the role of Product Manager produce stronger products.
You want the humans building the product to have an equal voice in product decisions. You want the hard decisions to painfully earn their resolution through long hours of informed humans staring at the problem from every angle. You want humans who build.
-
A vtable is a table of function pointers attached to a class instance. The vtable contains an entry for every overridable method:
```swift
class Fruit {
    func eat() {}
    func squeeze() {}
}

class Apple: Fruit {
    override func eat() {}
}
```
The vtable for Fruit has two entries. The vtable for Apple replaces the first entry with its own implementation of eat(). When you call eat() on an instance of Fruit, we compile the call by loading the vtable entry and performing an indirect jump.
A witness table is the same thing, but for a protocol conformance. Protocol conformances can be defined independently of types, so we can make Int conform to a new protocol for instance:
```swift
protocol P {
    func foo()
}

extension Int: P {
    func foo() {}
}
```
This will generate a global symbol that stores the witness table for “Int: P”, which contains one entry, the implementation of foo().
Now if I declare a generic function and call it with an Int:
```swift
func g<T: P>(_ t: T) {
    t.foo()
}

g(123)
```
Then we compile the call to g() by passing in a reference to the “Int: P” witness table. Inside the function, the call to t.foo() loads the right function pointer from the witness table and performs an indirect call.
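A toy model in Python can make the mechanics concrete (this sketches the idea only; Swift's actual layouts and calling conventions differ):

```python
# Model a vtable as an ordered list of function pointers per class.
def fruit_eat(self): return "fruit eats"
def fruit_squeeze(self): return "fruit squeezes"
def apple_eat(self): return "apple eats"

fruit_vtable = [fruit_eat, fruit_squeeze]
apple_vtable = [apple_eat, fruit_squeeze]   # overrides entry 0 (eat)

def call_eat(instance, vtable):
    """Dynamic dispatch: load entry 0 and jump indirectly."""
    return vtable[0](instance)

# Model a witness table as a mapping from protocol requirement to impl.
def int_foo(value): return value * 2

int_P_witness = {"foo": int_foo}            # the "Int: P" conformance

def g(t, witness):
    """Generic function: the caller passes the witness table along."""
    return witness["foo"](t)
```

Here `g(123, int_P_witness)` looks up `foo` in the witness table at the call site, just as the compiled generic function loads the right function pointer and performs an indirect call.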
-
-
-
When a state change occurs in SwiftUI, the framework reconstructs the entire view hierarchy. This might sound inefficient at first, but it's actually remarkably optimized, because SwiftUI views are lightweight value types (structs) and most importantly, SwiftUI uses structural identity to detect which views remain unchanged and skips redrawing them, re-rendering only the views impacted by the state change.
Structural identity is SwiftUI's way of recognizing whether a view before and after a state change is fundamentally the same view. When SwiftUI identifies views as structurally identical, it does not rerender them.
A view's structural identity is determined by:
- Its type
- Its position in the view hierarchy
- The identity of its ancestors
-
Claude can use an Anthropic-defined text editor tool to view and modify text files, helping you debug, fix, and improve your code or other text documents. This allows Claude to directly interact with your files, providing hands-on assistance rather than just suggesting changes.
Some examples of when to use the text editor tool are:
- Code debugging: Have Claude identify and fix bugs in your code, from syntax errors to logic issues.
- Code refactoring: Let Claude improve your code structure, readability, and performance through targeted edits.
- Documentation generation: Ask Claude to add docstrings, comments, or README files to your codebase.
- Test creation: Have Claude create unit tests for your code based on its understanding of the implementation.
-
Coming soon to Silicon Valley, the mission of the Museum of Technical and Advanced Computing (MOTAAC) will be to preserve and share an often-overlooked area of computing history: The computers that changed the world behind the scenes. From laboratory automation and process control to engineering and design workstations and supercomputers, it wasn't just the computers available to end users that brought us to where we are today.
-
A specification for configuring all attributes of a render task's destination and issuing asynchronous render tasks.
The `CIRenderDestination` class provides an API for specifying a render task destination's properties, such as buffer format, alpha mode, clamping behavior, blending, and color space, properties formerly tied to `CIContext`.
You can create a `CIRenderDestination` object for each surface or buffer to which you must render. You can also render multiple times to a single destination with different settings such as colorspace and blend mode by mutating a single `CIRenderDestination` object between renders.
Renders issued to a `CIRenderDestination` return to the caller as soon as the CPU has issued the task, rather than after the GPU has performed the task, so you can start render tasks on subsequent frames without waiting for previous renders to finish. If the render fails, a `CIRenderTask` will return immediately.
-
To access the list of keyboard shortcuts, go to your profile picture, and select Keyboard Shortcuts ⌨. You can also enter SHIFT+? on your keyboard. When you mouse over certain player buttons, you'll see the relevant keyboard shortcut. For example, when you mouse over the full screen icon, you'll see 'Full screen (f)', indicating you can enter f to open full screen.
Keyboard shortcuts and their functions:
- Spacebar: Play/Pause when the seek bar is selected. Activate a button if a button has focus.
- Play/Pause Media Key on keyboards: Play/Pause.
- k: Pause/Play in player.
- m: Mute/unmute the video.
- Stop Media Key on keyboards: Stop.
- Next Track Media Key on keyboards: Moves to the next track in a playlist.
- Left/Right arrow on the seek bar: Seek backward/forward 5 seconds.
- j: Seek backward 10 seconds in player.
- l: Seek forward 10 seconds in player.
- . (period): While the video is paused, skip to the next frame.
- , (comma): While the video is paused, go back to the previous frame.
- >: Speed up the video playback rate.
- <: Slow down the video playback rate.
- Home/End on the seek bar: Seek to the beginning/last seconds of the video.
- Up/Down arrow on the seek bar: Increase/Decrease volume 5%.
- Numbers 1 to 9: Seek to the 10% to 90% of the video.
- Number 0: Seek to the beginning of the video.
- /: Go to search box.
- f: Activate full screen. If full screen mode is enabled, activate f again or press escape to exit full screen mode.
- c: Activate closed captions and subtitles if available. To hide captions and subtitles, activate c again.
- Shift+N: Move to the next video (if you're using a playlist, this goes to the next video of the playlist; if not, it moves to the next YouTube suggested video).
- Shift+P: Move to the previous video. Note that this shortcut only works when you're using a playlist.
- i: Open the Miniplayer.
-
In this article, I will outline how to render high dynamic range (HDR) video with Metal. In contrast to rendering standard dynamic range (SDR) content, where we can sometimes get away without paying too much attention to color management, there are many important subtleties to rendering HDR colors accurately.
A lot of the heavy lifting will be done by AVFoundation, which handles video file format decoding and playback. We will also look at lower-level APIs in Core Video and Core Animation that make it easier to work with video content when rendering with Metal.
Our chief aim is to build a simple HDR video player, but the concepts we discuss and their implementation are applicable in any context where you need to ingest HDR content in Metal and render it with your own imaging pipeline or engine.
You can find the sample code for this article here.
-
-
The Responses API and Chat Completions API are two different ways to interact with OpenAI's models. This guide explains the key differences between the two APIs.
The Responses API is our newest core API and an agentic API primitive, combining the simplicity of Chat Completions with the ability to do more agentic tasks. As model capabilities evolve, the Responses API is a flexible foundation for building action-oriented applications, with built-in tools:
-
The OpenAI Agents SDK enables you to build agentic AI apps in a lightweight, easy to use package with very few abstractions. It's a production-ready upgrade of our previous experimentation for agents, Swarm. The Agents SDK has a very small set of primitives:
- Agents, which are LLMs equipped with instructions and tools
- Handoffs, which allow agents to delegate to other agents for specific tasks
- Guardrails, which enable the inputs to agents to be validated
In combination with Python, these primitives are powerful enough to express complex relationships between tools and agents, and allow you to build real world applications without a steep learning curve. In addition, the SDK comes with built-in tracing that lets you visualize and debug your agentic flows, as well as evaluate them and even fine-tune models for your application.
-
by marimo
marimo-blocks is a React component library that lets you embed Python notebooks in your web applications. It uses Pyodide to run Python code directly in the browser.
-
715-999-7483 is a phone-powered multiplayer website builder. By calling the phone number, anyone at any time can update the homepage by describing the changes they'd like to make to it. What happens when you give the public the power to change one central website? Will they use the power for good or for stupidity? And will they wait on hold to use it?
-
A delightful Ruby way to work with AI through a unified interface to OpenAI, Anthropic, Google, and DeepSeek.
Every AI provider comes with its own client library, its own response format, its own conventions for streaming, and its own way of handling errors. Want to use multiple providers? Prepare to juggle incompatible APIs and bloated dependencies.
RubyLLM fixes all that. One beautiful API for everything. One consistent format. Minimal dependencies — just Faraday and Zeitwerk. Because working with AI should be a joy, not a chore.
-
This issue has been reported in the developer forums. Apparently, if a Swift package includes a .swiftpm/ directory with .xcscheme files for its own project development, then Xcode will automatically detect and display these schemes in the dropdown list of schemes for your project. This is rather undesirable. It's an especially frustrating experience because even if you delete them from the list in Xcode, they will eventually reappear when packages get updated or refreshed. For example, you'll experience this issue with the popular library CocoaLumberjack, which includes schemes using a .swiftpm/ directory here.
The solution to preventing these package schemes from appearing in Xcode automatically is for package authors to switch to using an .xcworkspace file for their schemes, rather than a .swiftpm/ directory. Here's an example from the sideeffect.io/AsyncExtensions package.
-
The Agent Communication Protocol (ACP) is a protocol designed to standardize how agents communicate, enabling automation, agent-to-agent collaboration, UI integration, and developer tooling.
Rather than imposing strict specifications immediately, ACP emphasizes practical, useful features first. Standardization occurs once features demonstrate value, ensuring broader adoption and long-term compatibility.
-
Today's autoregressive LLMs chain enterprises to an unsustainable paradigm - sequential token generation that forces brutal tradeoffs between quality, speed, and cost. While frontier models compensate with massive compute (1000+ token "chain-of-thought" sequences), this approach inflates inference costs by 40x for complex tasks. Mercury's diffusion architecture breaks this trilemma.
-
- Help the user configure a Wi-Fi accessory
- Require a connection to run over a specific interface
- Listen for incoming connections
- Networking is hard in general.
- Apple devices support very dynamic networking, and your app has to work well in whatever environment it’s running in.
- Documentation for the APIs you need is tucked away in man pages and doc comments.
- In many cases you have to assemble these APIs in creative ways.
- The iOS Wi-Fi Lifecycle describes how iOS joins and leaves Wi-Fi networks. Understanding this is especially important if you’re building an app that works with a Wi-Fi accessory.
- Network Interface Concepts explains how Apple platforms manage network interfaces. If you’ve got this far, you definitely want to read this.
- Network Interface Techniques offers a high-level overview of some of the more common techniques you need when working with network interfaces.
- Network Interface APIs describes APIs and core techniques for working with network interfaces. It’s referenced by many other posts.
- Running an HTTP Request over WWAN explains why most apps should not force an HTTP request to run over WWAN, what they should do instead, and what to do if you really need that behaviour.
- If you’re building an iOS app with an embedded network server, see Showing Connection Information in an iOS Server for details on how to get the information to show to your user so they can connect to your server.
- Many folks run into trouble when they try to find the device’s IP address, or other seemingly simple things, like the name of the Wi-Fi interface. Don’t Try to Get the Device’s IP Address explains why these problems are hard, and offers alternative approaches that function correctly in all network environments.
- If you’re working with broadcasts or multicasts, see Broadcasts and Multicasts, Hints and Tips.
- If you’re building an app that works with a Wi-Fi accessory, see Working with a Wi-Fi Accessory.
- If you’re trying to gather network interface statistics, see Network Interface Statistics.
There are also some posts that are not part of this series but likely to be of interest if you’re working in this space:
- TN3179 Understanding local network privacy discusses the local network privacy feature.
- Calling BSD Sockets from Swift does what it says on the tin, that is, explains how to call BSD Sockets from Swift. When doing weird things with the network, you often find yourself having to use BSD Sockets, and that API is not easy to call from Swift. The code therein is primarily for the benefit of test projects, oh, and DevForums posts like these.
- TN3111 iOS Wi-Fi API overview is a critical resource if you’re doing Wi-Fi specific stuff on iOS.
- TLS For Accessory Developers tackles the tricky topic of how to communicate securely with a network-based accessory.
- Networking Resources has links to many other useful resources.
-
Agile software development and Formal Methods are traditionally seen as being in conflict. From an Agile perspective, there is pressure to deliver quickly, building vertical prototypes and doing many iterations /sprints, refining the requirements; from a Formal Methods perspective, there is pressure to deliver correctly and any change in requirements often necessitates changes in the formal specification and might even impact all arguments of correctness.
Over the years, the need to "be agile" has become a kind of mantra in software development management, and there is a prevalent prejudice that using formal methods is an impediment to being agile. In this paper, we contribute to the refutation of this stereotype by providing a real-world example of using good practices from formal methods and agile software engineering to deliver software that is simultaneously reliable, effective, testable, and that can also be iterated and delivered rapidly. We thus present what a lightweight software engineering methodology, drawing from appropriate formal methods techniques and providing the benefits of agile software development, can look like. Our methodology is informed and motivated by practical experience. We have devised and adapted it in the light of experience in delivering a large-scale software system that needs to meet complex real-world requirements: the Cardano blockchain and its cryptocurrency ada.
The cryptocurrency domain is a rather new application area for which no clear engineering practice exists, so it is well suited to agile methods. At the same time, there is a lot of real monetary value at stake, making it a good fit for using formal methods to ensure high quality and correctness. This paper reports on the issues that have been faced and overcome, and provides a number of real-world lessons that can be used to leverage the benefits of both agile and formal methods in other situations.
-
We combine dependent types with linear type systems that soundly and completely capture polynomial time computation. We explore two systems for capturing polynomial time: one system that disallows construction of iterable data, and one, based on the LFPL system of Martin Hofmann, that controls construction via a payment method. Both of these are extended to full dependent types via Quantitative Type Theory, allowing for arbitrary computation in types alongside guaranteed polynomial time computation in terms. We prove the soundness of the systems using a realisability technique due to Dal Lago and Hofmann.
Our long-term goal is to combine the extensional reasoning of type theory with intensional reasoning about the resources intrinsically consumed by programs. This paper is a step along this path, which we hope will lead both to practical systems for reasoning about programs' resource usage, and to theoretical use as a form of synthetic computational complexity theory.
-
It is easy to implement local conftest plugins for your own project or pip-installable plugins that can be used throughout many projects, including third party projects. Please refer to How to install and use plugins if you only want to use but not write plugins.
A plugin contains one or multiple hook functions. Writing hooks explains the basics and details of how you can write a hook function yourself.
pytest implements all aspects of configuration, collection, running, and reporting by calling well-specified hooks of the following plugins:
- builtin plugins: loaded from pytest's internal _pytest directory
- external plugins: installed third-party modules discovered through entry points in their packaging metadata
- conftest.py plugins: modules auto-discovered in test directories
In principle, each hook call is a 1:N Python function call, where N is the number of registered implementation functions for a given specification. All specifications and implementations follow the pytest_ prefix naming convention, making them easy to distinguish and find.
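The hook mechanism can be illustrated with a minimal conftest.py plugin. The hook name pytest_collection_modifyitems is a real pytest specification; the sorting policy below is just an illustrative choice:

```python
# conftest.py — a minimal local plugin consisting of one hook implementation.
# pytest discovers this file automatically; the function name must exactly
# match a pytest_-prefixed hook specification to be called.

def pytest_collection_modifyitems(config, items):
    """Called once after test collection. Here we sort the collected
    tests by name so runs are deterministic regardless of file order."""
    items.sort(key=lambda item: item.name)
```

When pytest fires this hook, every registered implementation (builtin, external, and conftest) runs in turn, which is exactly the 1:N dispatch described above.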
-
Below is an automated compilation of pytest plugins available on PyPI. It includes PyPI projects whose names begin with pytest- or pytest_, and a handful of manually selected projects. Packages classified as inactive are excluded.
-
The Raspberry Pi is a series of single-board computers that became very popular in the last few years. Due to its small size, low cost and low energy consumption, it can be used in a wide range of applications: home automation, media center, or even business applications. The running operating system is the Linux-based Raspberry Pi OS, and this makes it possible to run Swift on it - scripts or even applications, such as servers.
This post will first give some tips on how to set up a Raspberry Pi, and then cover the two ways of running a Swift app on it: building directly on the Raspberry Pi, and using Swift 6's new cross-compilation feature (which allows the compilation of Swift code on a Mac) to build a Vapor application. No Docker required!
-
Fly through your API workflow with an approachable yet powerful keyboard-centric interface. Run it locally or over SSH on remote machines and containers. Save your requests in a readable and version-control friendly format.
-
structx is a powerful Python library that extracts structured data from text using Large Language Models (LLMs). It dynamically generates type-safe data models and provides consistent, structured extraction with support for complex nested data structures.
Whether you're analyzing incident reports, processing documents, or extracting metrics from unstructured text, structx provides a simple, consistent interface with powerful capabilities.
-
A family of technologies empowering developers to use their existing web skills to create truly native UIs for both mobile and web from a single codebase. Designed for diverse use cases and rich interactivity, Lynx delivers vibrant and engaging UIs for large-scale apps like TikTok, featuring a speedy, versatile rendering engine, performance-driven dual-threaded UI programming, modern Rust-based tooling, and more!
-
Integrating large language models (LLMs) like ChatGPT into computer science education offers transformative potential for complex courses such as data structures and algorithms (DSA). This study examines ChatGPT as a supplementary tool for teaching assistants (TAs), guided by structured prompts and human oversight, to enhance instruction and student outcomes. A controlled experiment compared traditional TA-led instruction with a hybrid approach where TAs used ChatGPT-4o and ChatGPT o1 to generate exercises, clarify concepts, and provide feedback. Structured prompts emphasized problem decomposition, real-world context, and code examples, enabling tailored support while mitigating over-reliance on AI. Results demonstrated the hybrid approach's efficacy, with students in the ChatGPT-assisted group scoring 16.50 points higher on average and excelling in advanced topics. However, ChatGPT's limitations necessitated TA verification. This framework highlights the dual role of LLMs: augmenting TA efficiency while ensuring accuracy through human oversight, offering a scalable solution for human-AI collaboration in education.
-
A cylinder sits in a room. It is impassive, smooth, simple and small. It stands 14.8cm high, with a single blue-green circular light that traces around its upper rim. It is silently attending. A woman walks into the room, carrying a sleeping child in her arms, and she addresses the cylinder.
‘Alexa, turn on the hall lights’
The cylinder springs into life. ‘OK.’ The room lights up. The woman makes a faint nodding gesture, and carries the child upstairs.
This is an interaction with Amazon's Echo device. A brief command and a response is the most common form of engagement with this consumer voice-enabled AI device. But in this fleeting moment of interaction, a vast matrix of capacities is invoked: interlaced chains of resource extraction, human labor and algorithmic processing across networks of mining, logistics, distribution, prediction and optimization. The scale of this system is almost beyond human imagining. How can we begin to see it, to grasp its immensity and complexity as a connected form? We start with an outline: an exploded view of a planetary system across three stages of birth, life and death, accompanied by an essay in 21 parts. Together, this becomes an anatomical map of a single AI system.
-
Neumorphism is a new take on skeuomorphic design. Even though it relates to skeuomorphism, there is a new focus in the entire UI design style with neumorphism. This focus is not necessarily on the contrast or similarity between the real and digital worlds, but rather the color palette.
Yes, you read that right. Neumorphism is all about the color of the entire screen, delivering an entirely unique experience for users.
Enter neumorphism: the design world’s answer to “what if flat design had a touch of class?” It’s the subtle rebellion against the stark simplicity of flat design, a whisper of dimension in a world of right angles.
Neumorphism UI takes the core tenets of flat design—clean lines, minimalist aesthetic, and an emphasis on function—and infuses them with a hint of playful depth. Imagine flat design elements gently carved into the background, or softly extruded from it, all achieved through the magic of shadows and highlights. The effect is subtle, never garish, a mere suggestion of three-dimensionality that adds a touch of intrigue without sacrificing clarity.
- Depth and shadows Neumorphism is all about subtle contrast and solid colors. But how can we create an interface that delivers a wow-factor without any flashy elements? The answer lurks in the shadows. It’s not just a single, flat shadow — it’s a dance between inner and outer shadows, creating the illusion of elements being gently “pushed” in and “pulled” out from the background.
- Color and gradients You’ll want to ensure your background and components’ color works well in solid form, as you’ll need to apply this same color all around the UI design. For the shadow game to work, your background can’t be fully black or plain white.
- Rounded corners Think of a cloud — fluffy, soft, and inviting. That’s the feeling neumorphism strives for, and rounded corners are the key. They soften the edges of elements, creating a seamless transition between the element and the background. It’s like gently carving shapes into the canvas, maintaining a sense of unity and connection.
These are interesting times indeed, and neumorphism reflects that fact perfectly. It was born out of skeuomorphism and minimalism, but aims to deliver an experience users have never been through.
Will we see more of this style in upcoming products? Is this the new Material Design? The truth is that neumorphism comes with a set of flaws that represent a real problem. As it is now, the usability issues it brings about are too great for any product to risk it.
-
During my investigation of slow builds, I noticed some other frequent Xcode connections. For example, Xcode connects to devimages-cdn.apple.com every time it launches. According to Apple's support document Use Apple products on enterprise networks, that domain is used for "Xcode downloadable components". I assume this refers to platform support in the Components pane of Xcode Settings. (Note that the document doesn't mention developerservices2.apple.com.) Again, though, it's unnecessary to check for updates on every launch. I'd rather not tell Apple whenever I launch Xcode, or whenever I make a local build of my app. It certainly doesn't align with Apple's claim that they believe privacy is a fundamental human right. Or perhaps Apple believes that developers are subhuman…
I've saved the worst for last. For some reason, Xcode phones home to appstoreconnect.apple.com every time I open an Xcode project. This also appears to be unnecessary, and I experience no problems after denying the connections in Little Snitch, so I do! I assume that the connections send identifying information about the Xcode project to Apple, otherwise why even make the connections when opening a project? And all of these connections from Xcode, to every domain, require login to your Apple Developer account, so Apple is definitely receiving identifying information about you in any case.
In effect, Xcode is a developer analytics collection mechanism, whether you like it or not, which I don't.
-
I think the way to approach this is purely in terms of synchronous vs asynchronous execution. If you are writing a synchronous function that could be slow, think about making it non-isolated. You will, of course, need to pass arguments in and get results back out. That may require Sendable types or sending. But this is always the case when moving data around across isolation.
If you are writing an asynchronous function, just focus on getting your problem solved. You might find a synchronous bottleneck, but you can address that without breaking your API contract. Don't stress out about the performance of calls you make with await.
But if you happen to encounter the situation where you are a) calling an async function b) with the same isolation and c) that is then hitting a synchronous bottleneck, you have yourself a deeper issue. You almost certainly need to make some isolation changes. And if you aren't in control of that function, I'd like you to tell me about what you did, because I'm very interested!
-
For over 11 years, 18F has been proudly serving you to make government technology work better. We are non-partisan civil servants. 18F has worked on hundreds of projects, all designed to make government technology not just efficient but effective, and to save money for American taxpayers.
However, all employees at 18F – a group that the Trump Administration GSA Technology Transformation Services Director called "the gold standard" of civic tech – were terminated today at midnight ET.
-
Mac apps often need to handle large datasets efficiently, but SwiftUI's standard List can struggle with performance on macOS as the number of items grows. Scrolling may become sluggish, and memory usage can increase significantly.
For example, an app that enumerates files in a folder can easily generate a list with over 10,000 rows. While List is the obvious choice, its performance degrades at scale. A common alternative is wrapping a LazyVStack in a ScrollView, but this approach also struggles with large datasets.
So what's the solution? We can build a custom layout that aggressively recycles rows, repositioning them just in time as the user scrolls while reusing the same view identity as a row fragment. This works particularly well with a fixed row height, which is common in macOS applications, since it allows us to determine visible rows based on the scroll offset. While this solution was designed for macOS, the same technique can also be applied to iOS.
Our custom approach is faster because we reuse a limited set of view identities instead of creating a new view for every row in a large list. We achieve this by calculating a fragment ID, which is the row’s index modulo the maximum number of visible rows. This allows SwiftUI to recycle view identities as rows move off-screen, rather than instantiating new views each time. By reusing existing views, we significantly improve performance and reduce memory usage.
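The fragment-ID arithmetic is simple enough to sketch outside of SwiftUI. The following Python sketch (all names are hypothetical, not from the article's source) shows how a fixed row height turns the scroll offset into a visible range, and how the modulo maps each visible row onto a small pool of recycled identities:

```python
# Conceptual sketch of modulo-based row recycling. The identity pool only
# needs as many slots as can ever be visible at once, no matter how long
# the list is.

def fragment_id(row_index: int, max_visible_rows: int) -> int:
    """A row's view identity is its index modulo the number of live
    slots, so identities are reused as rows scroll off-screen."""
    return row_index % max_visible_rows


def visible_rows(scroll_offset: float, row_height: float,
                 viewport_height: float, row_count: int) -> range:
    """With a fixed row height, the visible range follows directly
    from the scroll offset."""
    first = max(0, int(scroll_offset // row_height))
    last = min(row_count - 1, int((scroll_offset + viewport_height) // row_height))
    return range(first, last + 1)


# A 400pt viewport with 20pt rows shows at most 21 rows, so 21 view
# identities suffice even for a 10,000-row list.
slots = 21
rows = visible_rows(scroll_offset=1000.0, row_height=20.0,
                    viewport_height=400.0, row_count=10_000)
ids = {fragment_id(i, slots) for i in rows}
assert len(ids) == len(rows)  # each visible row gets a distinct recycled slot
```

Because consecutive indices map to consecutive residues, the rows on screen at any moment always occupy distinct slots, which is what lets the same view identity be repositioned rather than recreated.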
In contrast, built-in components like List and LazyVStack cannot reuse views as aggressively. They assign each row a unique identity (provided in the ForEach statement) to properly maintain per-row state. As a result, SwiftUI's view graph must create a separate leaf for each row and compute its height individually, an expensive process as the number of rows grows.
The performance difference becomes even more pronounced when using AppKit-backed controls such as text views, sliders, or buttons. By reusing view identities, previously instantiated AppKit views remain attached and are efficiently recycled, much like how a native NSTableView optimizes performance at scale.
-
Large Language Models (LLMs) have been successful in mathematical reasoning tasks such as formal theorem proving when integrated with interactive proof assistants like Lean. Existing approaches involve training or fine-tuning an LLM on a specific dataset to perform well on particular domains, such as undergraduate-level mathematics. These methods struggle with generalizability to advanced mathematics. A fundamental limitation is that these approaches operate on static domains, failing to capture how mathematicians often work across multiple domains and projects simultaneously or cyclically. We present LeanAgent, a novel lifelong learning framework for formal theorem proving that continuously generalizes to and improves on ever-expanding mathematical knowledge without forgetting previously learned knowledge. LeanAgent introduces several key innovations, including a curriculum learning strategy that optimizes the learning trajectory in terms of mathematical difficulty, a dynamic database for efficient management of evolving mathematical knowledge, and progressive training to balance stability and plasticity. LeanAgent successfully proves 155 theorems previously unproved formally by humans across 23 diverse Lean repositories, many from advanced mathematics. It performs significantly better than the static LLM baseline, proving challenging theorems in domains like abstract algebra and algebraic topology while showcasing a clear progression of learning from basic concepts to advanced topics. In addition, we analyze LeanAgent's superior performance on key lifelong learning metrics. LeanAgent achieves exceptional scores in stability and backward transfer, where learning new tasks improves performance on previously learned tasks. This emphasizes LeanAgent's continuous generalizability and improvement, explaining its superior theorem-proving performance.
-
This directory contains the pieces of the Swift runtime libraries.
-
Connect your GitHub repositories directly to Claude to provide comprehensive context for your software development tasks. You can easily add repositories by selecting them from a list, helping Claude better understand and assist with your codebase.
-
A surprisingly common complaint I see from developers who have tried using LLMs for code is that they encountered a hallucination—usually the LLM inventing a method or even a full software library that doesn’t exist—and it crashed their confidence in LLMs as a tool for writing code. How could anyone productively use these things if they invent methods that don’t exist?
Hallucinations in code are the least harmful hallucinations you can encounter from a model.
-
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster through natural language commands. By integrating directly with your development environment, Claude Code streamlines your workflow without requiring additional servers or complex setup.
Claude Code’s key capabilities include:
- Editing files and fixing bugs across your codebase
- Answering questions about your code’s architecture and logic
- Executing and fixing tests, linting, and other commands
- Searching through git history, resolving merge conflicts, and creating commits and PRs
-
A new wide-spectrum content blocker for Safari designed to be performant, efficient, and effective.
-
Trusted by millions of game developers, game studios, 3D printing enthusiasts, and XR creators worldwide to bring their visions to life, Meshy is the leading AI 3D model generator for creating 3D models and animations in seconds.
-
We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment. This effect is observed in a range of models but is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit inconsistent behavior, sometimes acting aligned.
Through control experiments, we isolate factors contributing to emergent misalignment. Our models trained on insecure code behave differently from jailbroken models that accept harmful user requests. Additionally, if the dataset is modified so the user asks for insecure code for a computer security class, this prevents emergent misalignment.
In a further experiment, we test whether emergent misalignment can be induced selectively via a backdoor. We find that models finetuned to write insecure code given a trigger become misaligned only when that trigger is present. So the misalignment is hidden without knowledge of the trigger. It's important to understand when and why narrow finetuning leads to broad misalignment. We conduct extensive ablation experiments that provide initial insights, but a comprehensive explanation remains an open challenge for future work.
-
Terminal Trove curates and showcases all things in the terminal, such as command line interface (CLI) tools, text mode interface (TUI) tools, developer tools, and more, no matter what platform or medium.
-
Go to accountscenter.facebook.com and complete the steps below.
- Click “Ad preferences.”
- Click “Manage info.”
- Click “Activity information from ad partners.”
- Click “Review setting.”
- Select “No, don’t make my ads more relevant by using this information.”
- Click “Confirm.”
- Click “Ad preferences.”
- Click “Manage info.”
- Click “Ads from ad partners.”
- Select “Don’t show me ads from ad partners.”
- Click the “X” button to close out.
- Click “Your information and permissions.”
- Click “Your activity off Meta technologies.”
- Click “Manage future activity.”
- Select “Disconnect future activity.”
- Click “Continue.”
- Click “Disconnect future activity.”
-
Forget best practices, this talk celebrates Swift programming patterns that break the mould. Should you use them everywhere? Heavens no. Should you occasionally colour outside the lines to give your project superpowers? Yes please!
-
A powerful digital video stick for bold audio visual adventures, with dual RP2040 chips and a conveniently HDMI-shaped output connector to boot!
Use PicoVision to make and run your own homebrew games, draw digital art, recreate beloved demos, screensavers or WinAmp visualisations, visualise data, subvert advertising billboards, emulate CeeFax or whip up some last minute signage for your cyber night market.
We managed to cram a lot into this little thing...
- 🖼️ GPU (RP2040) Does all the heavy-lifting to display buttery-smooth, high-res, animations on your TV or monitor via HDMI.
- ⚙️ CPU (Pico W) Runs your code and provides an interface to other gadgets through USB, Wi-Fi, and Bluetooth!
- 🖥 HDMI connector Make use of TVs, monitors, giant projectors, or even tiny displays for building into a cosplay outfit.
- 🔊 Line out audio Bash out some bleeps and bloops! This digital audio interface can produce some quality noise.
- 💾 microSD card Never run out of space for your lovely assets by adding a sizeable microSD card to your setup.
- 🌡️ Qw/ST connector Add sensors or other types of breakout to your project so they can react to the world around them.
- 🔘 On-board reset and user buttons Create a simple user interface for your project without needing to add any extras.
-
PostScript is a compact, simple, and stack-based interpreted language. It is the precursor to PDF, yet it is significantly more powerful. As a Turing-complete language, it can theoretically compute anything that another Turing-complete language can.
Undoubtedly, PostScript is considered outdated. It's not intended to be used directly by humans, but to be machine-generated and interpreted on printers. There are no real Integrated Development Environments (IDEs) for it, nor are there any substantial debuggers. PostScript lacks a standard method for checking the argument types and return values of procedures. No standard libraries are available. Moreover, the PostScript language has been released in three official versions, and interpreters may also include specific instructions.
Despite these drawbacks, PostScript is stunningly fun to work with. Indeed, engineering is fundamentally about building things up within constraints, devising relevant guidelines and conventions. The lack of complicated language constructs, or huge libraries and frameworks to master, combined with its vintage charm, simplicity, and powerful set of primitives, makes PostScript an ideal candidate for software engineering pet projects such as PSChess.
This page compiles some of the aspects and techniques I employ when writing PostScript for enjoyment. It is by no means comprehensive and there may be areas for correction or improvement. If you are someone who still enjoys manually coding in PostScript, I would greatly appreciate your feedback.
-
However, Apple's documentation doesn't link to API documentation for this observe method. As far as I can tell, none exists. It's not documented anywhere on the main NSObject definition, nor in the KeyValueObserving protocol definition.
This is the key to this technique: the method taking a KeyPath must be defined in a protocol extension, which allows Self to refer to the static type of the instance at the time the method is called.
-
-
You can find the full implementation of the define-watch-rpcs macro and its associated codegen procedures in this gist.
Now that Swift also has macros in the language, you could probably write a DSL like this directly in Swift, but I just used what I know, and Swift macros look somewhat clunky compared to what Racket offers.
-
This post explores the deep connections between functional programming, lambda calculus, and category theory, with a particular focus on composability, a foundational principle in both mathematics and software engineering. Haskell, a functional programming language deeply rooted in these mathematical frameworks, serves as the practical implementation of these concepts, demonstrating how abstract theories can be applied to build robust, scalable, and maintainable software systems. We present key concepts such as function composition, functors, monads, and cartesian closed categories, illustrating their significance in modern software development. Additionally, it highlights how formal composability, grounded in lambda calculus and category theory, is crucial for managing the growing complexity of software systems. The discussion extends to the future implications of formal composability in the context of machine learning and automated software development, emphasizing its potential to transform the way complex systems are designed and verified. Finally, the essay provides a comprehensive self-study path for those interested in mastering Haskell, category theory, and their applications in various domains, including secure coding, asynchronous systems, and blockchain technology.
-
Porkbun is an amazingly awesome ICANN accredited domain name registrar based out of the Pacific Northwest. We're different, we're easy, and we're affordable. Use us, you won't be sorry. If you don't use us we'll be sad, but we'll still love you.
-
This page indexes all the WWW resources associated with the Jargon File and its print version, The New Hacker’s Dictionary. It’s as official as anything associated with the Jargon File gets.
On 23 October 2003, the Jargon File achieved the dubious honor of being cited in the SCO-vs.-IBM lawsuit. See the FUD entry for details.
- Browse the Jargon File.
- What’s new in the Jargon File.
- Other HTML-accessible versions of the Jargon File
- Search for Jargon terms
- Download the Jargon File in different forms.
- How to add or change entries in the Jargon File
- So, you want to quote the Jargon File?
- So, you want to mirror or re-package the Jargon File?
- View the Jargon File’s change log
- Read this if you think The New Hacker’s Dictionary is bogus
- Related resources
- The Book on the File: The New Hacker’s Dictionary
- Order the book version from MIT Press
-
- Shell history sync: Sync your shell history to all of your machines, wherever they are
- End-to-end encryption: All data is encrypted, and can only be read by you
- Efficient search: Search decades of shell history, and recall it in an instant. Atuin offers configurable full text or fuzzy search, filterable by host, directory, etc.
- Open source: Atuin is open source with a permissive license, and has a growing community
- Data import: Bring your existing history with you - Atuin supports importing from a wide variety of formats
- Store extra context: Atuin stores extra context with your commands - working directory, exit code, and more!
-
If you’ve used SwiftUI for long enough, you’ve probably noticed that the public Swift APIs it provides are really only half the story. Normally inconspicuous unless something goes exceedingly wrong, the private framework called AttributeGraph tracks almost every single aspect of your app from behind the scenes to make decisions on when things need to be updated. It would not be much of an exaggeration to suggest that this C++ library is actually what runs the show, with SwiftUI just being a thin veneer on top to draw some platform-appropriate controls and provide a stable interface to program against. True to its name, AttributeGraph provides the foundation of what a declarative UI framework needs: a graph of attributes that tracks data dependencies.
Mastering how these dependencies work is crucial to writing advanced SwiftUI code. Unfortunately, being a private implementation detail of a closed-source framework means that searching for AttributeGraph online usually only yields results from people desperate for help with their crashes. (Being deeply unpleasant to reverse-engineer definitely doesn’t help things, though some have tried.) Apple has several videos that go over the high-level design, but unsurprisingly they shy away from mentioning the existence of AttributeGraph itself. Other developers do, but only fleetingly.
This puts us in a real bind! We can `Self._printChanges()` all day and still not understand what is going on, especially if the problems we have relate to missing updates rather than too many of them. To be honest, figuring out what AttributeGraph is doing internally is not all that useful unless it is not working correctly. We aren’t going to be calling those private APIs anyways, at least not easily, so there’s not much point exploring them. What’s more important is understanding what SwiftUI does and how the dependencies need to be set up to support that. We can take a leaf out of the generative AI playbook and go with the approach of just making guesses as to how things are implemented. Unlike AI, we can also test our theories. We won’t know whether our speculation is right, but we can definitely check to make sure we’re not wrong!
-
Create stunning spatial computing and augmented reality experiences with professional-grade tools. Design for iOS, Vision Pro, and beyond — all from your Mac.
Transform Your Creative Vision into an Immersive Reality
Scenery is your professional-grade spatial design studio for creating stunning XR experiences. Built for creators who want to push the boundaries of XR creation and distribution, this powerful Apple-native XR editor brings your immersive stories to life across macOS, iOS, and Vision Pro.
- Professional XR Editor: Craft high-fidelity spatial experiences with an intuitive interface
- Cross-Platform Creation: Design once, deploy everywhere - from Mac to Mobile and Vision Pro
- No-Code Required: Built for designers and artists, no programming experience needed
- Multi-Sensory Tools: Create with spatial audio, custom haptics, and stunning visuals
- Instant Distribution: Share experiences instantly through AR App Clips – no app download required
- Native Performance: Leveraging the latest and greatest ARKit and RealityKit for best-in-class tracking
- Custom Development: Dive deeper into experience creation with features for professionals like JS scripting and custom shaders
-
Scenery democratizes the creation and distribution of high-quality immersive experiences for Mobile AR and Apple Vision Pro. Craft multi-sensory content in a blink and share it instantly and without app download.
-
To deepen the public conversation about how AI models should behave, we’re sharing the Model Spec, our approach to shaping desired model behavior.
The Model Spec outlines the intended behavior for the models that power OpenAI's products, including the API platform. Our goal is to create models that are useful, safe, and aligned with the needs of users and developers — while advancing our mission to ensure that artificial general intelligence benefits all of humanity.
To realize this vision, we need to:
- Iteratively deploy models that empower developers and users.
- Prevent our models from causing serious harm to users or others.
- Maintain OpenAI's license to operate by protecting it from legal and reputational harm.
These goals can sometimes conflict, and the Model Spec helps navigate these trade-offs by instructing the model to adhere to a clearly defined chain of command.
We are training our models to align to the principles in the Model Spec. While the public version of the Model Spec may not include every detail, it is fully consistent with our intended model behavior. Our production models do not yet fully reflect the Model Spec, but we are continually refining and updating our systems to bring them into closer alignment with these guidelines.
The Model Spec is just one part of our broader strategy for building and deploying AI responsibly. It is complemented by our usage policies, which outline our expectations for how people should use the API and ChatGPT, as well as our safety protocols, which include testing, monitoring, and mitigating potential safety issues.
By publishing the Model Spec, we aim to increase transparency around how we shape model behavior and invite public discussion on ways to improve it. Like our models, the spec will be continuously updated based on feedback and lessons from serving users across the world. To encourage wide use and collaboration, the Model Spec is dedicated to the public domain and marked with the Creative Commons CC0 1.0 deed.
-
Supercharge your marketing campaigns with Kokai, the new AI-driven platform experience by The Trade Desk. "Kokai," which means "open waters" in Japanese and is slang for "open for business," sets new benchmarks for transparency and efficiency in digital advertising.
Unlike the walled-garden approach taken by some major tech companies, integrating with Kokai empowers you to take full advantage of programmatic advertising. This ensures that you focus on impressions and getting the best value in media buying, rather than chasing cheap reach. Our platform enables you to reach your target audience on the open internet through an omnichannel strategy that includes Connected TV (CTV) and retail media.
To help you navigate and invest in the biggest opportunities on the open internet, we've built Kokai on five key principles:
- The most effective marketing begins with seeds
- Upgrading the trader toolkit to amplify your strategic value
- Doubling down on quality inventory at scale
- Data and insights to make smarter decisions
- Matching the new intuitive UI with streamlined GraphQL API
With Kokai, we provide you with the right data and insights to enhance your digital advertising efforts. Our audience-based approach centers on the concept of "seeds." To unlock new metrics and optimize data-driven decisioning for your ad-group strategies, start by creating a seed using first-party data through Galileo or tags to incorporate data from your CRM, app, or site.
Kokai goes beyond data. We enable you to make smarter decisions with in-platform contextual insights, actionable data visualizations, and new measurement indexes. Use our REST or GraphQL API to tap into the full functionality of Kokai.
-
Use Vision Pro Demo Fit to measure a guest’s face and determine their vision needs to choose the most suitable Light Seal, Head Band, and Optical Inserts for the best possible Apple Vision Pro demo experience.
-
This guide will go through the basics of using Lua in Nvim. It is not meant to be a comprehensive encyclopedia of all available features, nor will it detail all intricacies. Think of it as a survival kit — the bare minimum you need to know to comfortably get started on using Lua in Nvim.
An important thing to note is that this isn't a guide to the Lua language itself. Rather, this is a guide on how to configure and modify Nvim through the Lua language and the functions we provide to help with this. Take a look at luaref and lua-concepts if you'd like to learn more about Lua itself. Similarly, this guide assumes some familiarity with the basics of Nvim (commands, options, mappings, autocommands), which are covered in the user-manual.
-
OpenRewrite is an open-source automated refactoring ecosystem for source code, enabling developers to effectively eliminate technical debt within their repositories.
It consists of an auto-refactoring engine that runs prepackaged, open-source refactoring recipes for common framework migrations, security fixes, and stylistic consistency tasks – reducing your coding effort from hours or days to minutes. Build tool plugins like the OpenRewrite Gradle plugin and the OpenRewrite Maven plugin help you run these recipes on one repository at a time.
While the original focus was on the Java language, the OpenRewrite community is continuously expanding language and framework coverage. Thousands of great individuals and teams are working together to make software seamless to update and continuously secure.
-
A Lossless Semantic Tree (LST) is a tree representation of code. Unlike the traditional Abstract Syntax Tree (AST), OpenRewrite's LST offers a unique set of characteristics that make it possible to perform accurate transformations and searches across a repository:
- Type-attributed. Each LST is imbued with type information. For example, when referencing a field, the source code may just refer to it as `myField`. The OpenRewrite LST for `myField`, on the other hand, would contain additional information about what the type of `myField` is, even if it isn't defined in the same source file or even the same project.
- Format-preserving. Whitespace before and after LSTs is preserved in the tree so the tree can be printed out to reconstitute the original source code without clobbering formatting. Additionally, refactoring operations that insert code are sensitive to the local style of the code around them and match the local style.
-
Over 18 weeks in Summer 2023, 33 researchers from diverse fields including architecture, law, game design, technology, media, art, and workplace safety engaged in collaborative speculation, discovery, design, invention, and creative production to explore protocols, broadly construed, from various angles.
Their findings, catalogued here, comprise a variety of textual and non-textual artifacts (including art works, game designs, and software), organized around a set of research themes: built environments, danger and safety, dense hypermedia, technical standards, web content addressability, authorship, swarms, protocol death, and (artificial) memory.
-
-
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
MCP helps you build agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides:
- A growing list of pre-built integrations that your LLM can directly plug into
- The flexibility to switch between LLM providers and vendors
- Best practices for securing your data within your infrastructure
-
In this post, we’ll take a look at how to customize the macOS menu bar for a SwiftUI app, using SwiftUI tools like `CommandMenu` and `CommandGroup`.
Although SwiftUI helps you start working on new platforms, you will run into many platform-specific concepts and challenges as you build your first few apps on the new platform.
One thing that was new to me as I started building apps for macOS, was how to customize the menu bar items for your app.
SwiftUI does a good job of keeping this simple, with the concept of commands. Let’s take a look at how we can add, remove, and replace items in the main menu.
-
I've been deep in the weeds connecting XcodeProj to XcodeGraph, turning raw .xcworkspace or .xcodeproj data into a delightful graph structure.
You might be wondering, "Why do we need a graph for something as 'simple' as an Xcode project?" Let's just say that once you start exploring advanced analysis, partial builds, or illusions hidden in tangly references, you'll be glad everything ends up in a single, coherent "map".
In this post, I'll walk through how the mapping process works, which pitfalls we cover, and why you might want to harness it for your own projects.
Sometimes, your codebase feels like an overgrown secret garden: you open Xcode, spot multiple targets referencing frameworks, Swift packages, or script phases, but the big picture is elusive. XcodeGraph helps transform that hidden mess into a directed acyclic graph (DAG)—in simpler terms, a neat diagram of who depends on what. But it’s more than just a DAG: XcodeGraph provides higher-level models and user-friendly properties that abstract away the complexity, making the project structure easier to understand and interact with.
In contrast, XcodeProj offers a near 1:1 mapping of the .pbxproj format to Swift. It’s precise but low-level, exposing the raw details without much abstraction. That’s where XcodeGraphMapper comes in: it’s the pipeline that unifies this raw data into the more accessible structure that XcodeGraph provides.
The benefits are huge. Imagine wanting to only test modules that changed or to inspect a suspicious missing framework from a test target. Once your project is represented as a DAG with rich, user-friendly models, you can see those connections in a single pass. No more rummaging through thousands of lines in .pbxproj, just a straightforward structure to query or visualize.
-
Djot is a light markup syntax. It derives most of its features from commonmark, but it fixes a few things that make commonmark's syntax complex and difficult to parse efficiently. It is also much fuller-featured than commonmark, with support for definition lists, footnotes, tables, several new kinds of inline formatting (insert, delete, highlight, superscript, subscript), math, smart punctuation, attributes that can be applied to any element, and generic containers for block-level, inline-level, and raw content. The project began as an attempt to implement some of the ideas I suggested in Beyond Markdown.
-
If an Apple Account is only used for making purchases, those purchases can be migrated to a primary Apple Account to consolidate them.
- On your iPhone or iPad, open the Settings app.
- Tap your name, then tap Media & Purchases.
- Tap View Account. You might be asked to sign in.
- Scroll down, then tap Migrate Purchases.
- Review the information about both accounts, then follow the tasks to complete the migration of purchases to the primary account.
- When complete, you’ll see “Purchases Have Been Migrated”. The email addresses associated with both accounts will also receive a confirmation email.
- Be sure to check your Media & Purchases settings, sign out of the secondary Apple Account, and then sign in with the primary Apple Account.
You might not see Migrate Purchases if you’re not eligible. Check what to do before you migrate purchases.
If you used the secondary Apple Account for Media & Purchases on any other devices (including Apple TV, HomePod, or other devices with the Apple TV app or Apple Music app), sign out of the secondary Apple Account. Then sign in with the primary Apple Account that the purchases were migrated to.
The secondary Apple Account can no longer be used for Media & Purchases.
If you no longer want your purchases migrated, learn how to undo a migration of purchases in order to make the secondary Apple Account usable again.
-
You can choose to migrate apps, music, and other content you’ve purchased from Apple on a secondary Apple Account to a primary Apple Account. The secondary Apple Account might be an account that’s used only for purchases. You’ll need access to the primary email address or phone number and password for both accounts, and neither account should be shared with anyone else. Learn more about how to migrate purchases.
- At the time of migration, the Apple Account signed in for use with iCloud and most features on your iPhone or iPad will be referred to as the primary Apple Account.
- At the time of migration, the Apple Account signed in just for use with Media & Purchases will be referred to as the secondary Apple Account.
-
This guide will walk you through creating and packaging a standalone command-line application that can be installed with pipx, a tool for creating and managing Python virtual environments and exposing packages’ executable scripts (and available manual pages) for use on the command line.
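As a taste of what such a guide builds toward, here is a minimal sketch of a console entry point (the package name `mypkg` and command name `greet` are made up for illustration). pipx installs the package into its own isolated virtual environment and puts the commands declared under `[project.scripts]` in pyproject.toml on your PATH.

```python
# cli.py inside a hypothetical package `mypkg`.
# pyproject.toml would expose it with:
#
#   [project.scripts]
#   greet = "mypkg.cli:main"
#
# so that `pipx install mypkg` puts a `greet` command on PATH.
import sys


def main() -> int:
    """Console entry point: greet the name given as the first argument."""
    args = sys.argv[1:]
    name = args[0] if args else "world"
    print(f"Hello, {name}!")
    return 0  # exit status reported back to the shell
```

After installation, running `greet Ada` would invoke `main()` with `Ada` on `sys.argv` and exit with status 0.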
-
-
Much of our existing tooling is geared towards CI integration, whether through our Fastlane and Gradle plugins or manually calling the API. A typical CI flow involves building your app and then uploading it to Emerge. Then, we analyze the app and report results back to the originating pull request.
Part of the CLI's functionality will be geared towards making the Emerge integration easier. But the CLI also addresses a current limitation of Emerge: we can only see what's included in your upload.
Emerge analyzes the compiled result of an app, meaning we have limited knowledge of the source code. We can suggest insights to fix for the app, but we rely on the developer to implement the fixes. And we can't suggest fixes tailored to the codebase itself, only generalized suggestions that won't work for every project.
Now, with a CLI, we can finally get the best of both worlds and do much more. Our vision is to make using Emerge as easy as possible, and also provide commands that an everyday mobile developer can find useful.
-
I’m writing a book about Instruments. The book will show you how to find the most important information from the Instruments data, such as the code causing problems. Some of the things you will learn in the book include the following:
- Using the Leaks instrument to find memory leaks and find the code allocating the leaked memory.
- Using the Allocations instrument to find how much memory your app uses and find the code that allocates the most memory.
- Using the Time Profiler instrument to find the slow spots in your code.
- Using the SwiftUI instruments to find the views that are redrawn the most and the view properties triggering those redraws.
After reading this book you will be able to use Instruments and find the code causing problems in your app. Finding the code you need to fix is the first step to making apps that run faster, use less memory, and don’t leak memory.
-
Anonymous Github allows you to simply anonymize your GitHub repository. Several anonymization options are available to ensure that you do not break double anonymization, such as removing links, images, or specific terms. You still keep control of your repository, and you can define an expiration date to make your repository unavailable after the review.
-
When building with Swift, Apple provides a comprehensive toolchain through the Xcode installation. Running `swift run` seamlessly builds and executes your code using the Swift compiler, eliminating concerns about the underlying toolchain. However, additional tools like SwiftFormat or swift-openapi-generator may be required. These tools need to be installed on your system, raising the question of how to manage their installation—not only for developers' environments but also for CI/CD pipelines. In this blog post, we’d like to introduce you to Mise, a tool that not only addresses the installation and distribution of tools but also ensures they are activated deterministically so that everyone is using the same version of the tools.
-
Ploomber is the fastest way to build data pipelines ⚡️. Use your favorite editor (Jupyter, VSCode, PyCharm) to develop interactively and deploy ☁️ without code changes (Kubernetes, Airflow, AWS Batch, and SLURM). Do you have legacy notebooks? Refactor them into modular pipelines with a single command.
-
Success in the LLM space isn't about building the most sophisticated system. It's about building the right system for your needs. Start with simple prompts, optimize them with comprehensive evaluation, and add multi-step agentic systems only when simpler solutions fall short.
When implementing agents, we try to follow three core principles:
- Maintain simplicity in your agent's design.
- Prioritize transparency by explicitly showing the agent’s planning steps.
- Carefully craft your agent-computer interface (ACI) through thorough tool documentation and testing.
Frameworks can help you get started quickly, but don't hesitate to reduce abstraction layers and build with basic components as you move to production. By following these principles, you can create agents that are not only powerful but also reliable, maintainable, and trusted by their users.
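As a toy illustration of the three principles above (not code from any particular framework), the sketch below keeps the agent loop simple, prints each planned step so the behavior stays transparent, and gives its single tool a small, documented contract:

```python
from typing import Callable


def calculator(expression: str) -> str:
    """Tool contract (the 'agent-computer interface'): takes an arithmetic
    expression like '2 + 3' and returns its value as a string. Documented
    and tested so callers know exactly what to expect."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        return "error: unsupported characters"
    return str(eval(expression))  # acceptable in this toy: input is whitelisted above


def run_agent(task: str, plan: list[str], tools: dict[str, Callable[[str], str]]) -> str:
    """Execute a fixed plan step by step, printing each step (transparency)."""
    result = task
    for step in plan:
        tool_name, _, arg = step.partition(":")
        print(f"step -> {step}")  # surface the plan instead of hiding it
        result = tools[tool_name](arg)
    return result


answer = run_agent(
    task="what is 2 + 3?",
    plan=["calculator:2 + 3"],  # keep the design simple: one tool, one step
    tools={"calculator": calculator},
)
```

A real agent would let the model propose the plan; the point here is only that the plan is explicit, the loop is small, and the tool interface is documented.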
-
To properly test state preservation with `SceneStorage` in Xcode:
- Run the app in the Xcode simulator.
- Change the state (e.g., switch tabs or navigate within the app).
- Press the Home button in the simulator to send the app to the background.
- Press the Stop button in Xcode to terminate the app.
- Run the app again in Xcode and check if the state is preserved.
-
uv supports building Python packages into source and binary distributions via `uv build` and uploading them to a registry with `uv publish`.
-
This tutorial walks you through how to package a simple Python project. It will show you how to add the necessary files and structure to create the package, how to build the package, and how to upload it to the Python Package Index (PyPI).
-
-
Create unique invitations and bring people together for life’s most exciting moments. Customize the background of your invitation with a photo from your library, or choose an emoji background to bring your event to life. Easily see who is attending and make sure you never miss a moment by adding a Shared Album directly to the event. Whether you’re attending an event or hosting one yourself, Invites makes it easy to get the party started.
-
marimo is an open-source reactive notebook for Python — reproducible, git-friendly, executable as a script, and shareable as an app.
-
A Mac laptop with Apple silicon automatically turns on and starts up when you open its lid or connect it to power. With macOS Sequoia 15 or later, you can change this behavior without affecting your ability to use your keyboard or trackpad to turn on your Mac.
- Make sure that your Mac laptop with Apple silicon is using macOS Sequoia or later.
- Open the Terminal app, which is in the Utilities folder of your Applications folder.
- Type one of these commands in Terminal, then press Return:
- To prevent startup when opening the lid or connecting to power:
sudo nvram BootPreference=%00
- To prevent startup only when opening the lid:
sudo nvram BootPreference=%01
- To prevent startup only when connecting to power:
sudo nvram BootPreference=%02
- Type your administrator password when prompted (Terminal doesn’t show the password as it's typed), then press Return.
To undo any of the previous commands and reenable automatic startup when opening the lid or connecting to power, enter `sudo nvram -d BootPreference` in Terminal.
-
These models perform best with straightforward prompts. Some prompt engineering techniques, like instructing the model to "think step by step," may not enhance performance (and can sometimes hinder it). Here are some best practices:
- Developer messages are the new system messages: Starting with `o1-2024-12-17`, reasoning models support `developer` messages rather than `system` messages, to align with the chain of command behavior described in the model spec.
- Keep prompts simple and direct: The models excel at understanding and responding to brief, clear instructions.
- Avoid chain-of-thought prompts: Since these models perform reasoning internally, prompting them to "think step by step" or "explain your reasoning" is unnecessary.
- Use delimiters for clarity: Use delimiters like markdown, XML tags, and section titles to clearly indicate distinct parts of the input, helping the model interpret different sections appropriately.
- Limit additional context in retrieval-augmented generation (RAG): When providing additional context or documents, include only the most relevant information to prevent the model from overcomplicating its response.
- Try zero shot first, then few shot if needed: Reasoning models often don't need few-shot examples to produce good results, so try to write prompts without examples first. If you have more complex requirements for your desired output, it may help to include a few examples of inputs and desired outputs in your prompt. Just ensure that the examples align very closely with your prompt instructions, as discrepancies between the two may produce poor results.
- Provide specific guidelines: If there are ways you explicitly want to constrain the model's response (like "propose a solution with a budget under $500"), explicitly outline those constraints in the prompt.
- Be very specific about your end goal: In your instructions, try to give very specific parameters for a successful response, and encourage the model to keep reasoning and iterating until it matches your success criteria.
- Markdown formatting: Starting with `o1-2024-12-17`, reasoning models in the API will avoid generating responses with markdown formatting. To signal to the model when you do want markdown formatting in the response, include the string `Formatting re-enabled` on the first line of your `developer` message.
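Putting a few of these tips together, a request might be shaped as below. This is only a sketch of the message layout described above (the task text is made up, and the surrounding SDK call is omitted); check your SDK version for the exact call signature.

```python
def build_request(task: str, context: str) -> dict:
    """Build a reasoning-model request following the guidance above:
    a `developer` message instead of a `system` message, markdown output
    re-enabled explicitly, delimiters around the supplied context, and
    no 'think step by step' boilerplate."""
    developer = (
        "Formatting re-enabled\n"  # first line: opt back in to markdown output
        "Be concise. Propose an itinerary with a budget under $500."  # explicit constraint
    )
    # Delimit the retrieved context so the model can tell it apart from the task.
    user = f"{task}\n\n<context>\n{context}\n</context>"
    return {
        "model": "o1-2024-12-17",
        "messages": [
            {"role": "developer", "content": developer},
            {"role": "user", "content": user},
        ],
    }


req = build_request("Plan a weekend trip.", "Leaving from Lisbon on Friday evening.")
```

Note there are no few-shot examples here; per the zero-shot-first advice, they would only be added if the plain prompt underdelivers.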
-
Radicle is an open source, peer-to-peer code collaboration stack built on Git. Unlike centralized code hosting platforms, there is no single entity controlling the network. Repositories are replicated across peers in a decentralized manner, and users are in full control of their data and workflow.
-
Our upgrade path from bash to a better language and runtime.
-
This notebook demonstrates how to use Qwen2.5-VL's agent function call capabilities to interact with a mobile device. It showcases the model's ability to generate and execute actions based on user queries and visual context.
-
Color Oracle is a free color blindness simulator for Windows, Mac and Linux. It takes the guesswork out of designing for color blindness by showing you in real time what people with common color vision impairments will see.
Color Oracle applies a full screen color filter to art you are designing, independently of the software in use. Eight percent of all males are affected by color vision impairment – make sure that your graphical work is readable by the widest possible audience.
-
Swift Build contains support for building software using a number of Apple-specific tools and product types. Now that it’s been contributed to the Swift project, we’d like to take a more principled approach to how this platform-specific support is organized as part of our efforts to provide first class support for additional non-Apple platforms. We’ve moved support for a number of tools, like the Asset Catalog and Core Data compilers, into Swift Build’s SWBApplePlatform plugin, and we intend to continue this process of separating support for Apple platform technologies from the core build engine implementation. Even though this platform support is moving into plugins for organizational purposes, it will remain a part of the open source Swift Build distribution, in order to ensure that open source clients like SwiftPM can continue leveraging it.
-
Swift continues to grow in popularity as a cross-platform language supporting a wide variety of use cases, with support on a variety of embedded devices, form factors that encompass wearables to server, and a wide variety of operating systems. As Swift expands, there’s value in investing in matching cross-platform build tools that provide a powerful, consistent, and flexible experience across the ecosystem.
As a foundational step in this new chapter of Swift build technologies, today Apple is open sourcing Swift Build, a powerful and extensible build engine that provides a set of build rules for building Swift projects. Swift Build is the engine used by Xcode, which supports millions of apps in the App Store as well as the internal build process for Apple’s own operating systems. The open source repository also includes support for targeting Linux and Windows.
-
We're working on a BYOC (Bring Your Own Cloud) version of Unison Cloud. By launching a few containers in your VPC (or even on-prem), you'll get a Unison Cloud cluster anywhere in the world, in minutes.
All data stays with you; we never see your data or the service requests sent to deployed services. We only operate a lightweight multi-tenant control plane for managing these Unison Cloud clusters, without access to any of the data inside them. This is good for security and avoids the outgoing bandwidth costs cloud providers charge for data exiting your VPC.
We're planning a free tier for BYOC clusters up to a few nodes in size, suitable for prototypes and trying out the Unison Cloud experience.
-
-
Access your S3 storage from the Files app, Finder & other apps on your iPhone, iPad or Mac with this tool from the developer of the highly acclaimed Working Copy.
Configuration is fast, making your S3 buckets readily available in the filesystem alongside regular cloud storage. Files are downloaded as you open them, and changes are uploaded back to S3.
-
SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications
Self-attention has become a de facto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called "SwiftFormer" which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2x faster compared to MobileViT-v2.
-
This document describes the L4S architecture, which enables Internet applications to achieve low queuing latency, low congestion loss, and scalable throughput control. L4S is based on the insight that the root cause of queuing delay is in the capacity-seeking congestion controllers of senders, not in the queue itself. With the L4S architecture, all Internet applications could (but do not have to) transition away from congestion control algorithms that cause substantial queuing delay and instead adopt a new class of congestion controls that can seek capacity with very little queuing. These are aided by a modified form of Explicit Congestion Notification (ECN) from the network. With this new architecture, applications can have both low latency and high throughput.
The architecture primarily concerns incremental deployment. It defines mechanisms that allow the new class of L4S congestion controls to coexist with 'Classic' congestion controls in a shared network. The aim is for L4S latency and throughput to be usually much better (and rarely worse) while typically not impacting Classic performance.
-
Everyone is talking about new advances in Artificial Intelligence (AI): texts written by ChatGPT, images drawn by Midjourney, and self-driving cars from Tesla.
When I was a sophomore, I learned the fundamentals of my subject from John McCarthy, a founder of AI and a pioneer of programming. In the early days, the AI field debated the merits of two complementary methods: logic vs. heuristics. Typical of the first is proving properties of programs, which became my research interest. Typical of the second is machine learning, the foundation of ChatGPT, Midjourney, and self-driving.
This talk will contrast the two approaches, discussing the benefits and risks of each, and how the first may curb shortcomings of the second.
Artists and writers are worried that AI will put them out of a job. One of the next professions on the list is programmers. Already, ChatGPT and related systems can do a credible job of generating simple programs, such as code for web pages. However, also already, such systems have demonstrated that they routinely write code containing known security bugs.
One possible scenario is that heuristic techniques will prove as adequate as humans—and far cheaper—at simple tasks, putting writers, artists, and programmers out of work. Bereft of new data to learn from, the machine learning applications will then fall into stagnation. They will be fine at producing articles, art, and code close to what has been produced before, but unable to produce anything original. And by then there may no longer be writers, artists, or programmers to hire, as who would study for a profession where no one can find work because they’ve been displaced by machines?
A different scenario is to pass laws to ensure that writers and artists are fairly recompensed when AI generates artifacts based on their work. Regarding code, the logical techniques have shown they can vastly improve reliability. Synthesising logical and heuristic techniques may lead to code that is both cheaper and more reliable. Programmers would shift from writing code to writing logical specifications, with AI helping to generate code proved to meet those specifications.
- AI machines aren’t ‘hallucinating’, but their makers are. Naomi Klein. The Guardian, 8 May 2023.
- The problem with counterfeit people. Daniel Dennett. The Atlantic, 16 May 2023.
- Will AI become the new McKinsey? Ted Chiang. The New Yorker (online), 4 May 2023.
- Xavier Leroy. Formal verification of a realistic compiler. Communications of the ACM, July 2009, 52(7), pages 107–115.
- Chris Newcombe, Tim Rath, Fan Zhang, Bogdan Munteanu, Marc Brooker, Michael Deardeuff. How Amazon Web Services Uses Formal Methods. Communications of the ACM, April 2015, 58(4), pages 66–73.
-
-
With its wide variety of window management tools, Moom makes moving and resizing windows fast, easy, and if you're as geeky as we are, even fun. Scroll down to learn more and see all the main features in action.
-
Services on macOS allow us to extend our app’s functionality to the entire system, enabling users to interact with our app’s features while working in other contexts without explicitly opening it. These services are accessible via the context menu or from an application's Services menu in the macOS menu bar.
-
Firmware updates are delivered automatically while your AirPods are charging and in Bluetooth range of your iPhone, iPad, or Mac that's connected to Wi-Fi. You can also use your iPhone, iPad, or Mac to check that your AirPods have the latest version.
If your AirPods don’t have the latest firmware version, you can update your firmware.
- Make sure that your AirPods are in Bluetooth range of your iPhone, iPad, or Mac that's connected to Wi-Fi.
- Put your AirPods in their charging case and close the lid.
- Plug the charging cable into your charging case, then plug the other end of the cable into a USB charger or port.
- Keep the lid of the charging case closed, and wait at least 30 minutes for the firmware to update.
- Open the lid of the charging case to reconnect your AirPods to your iPhone, iPad, or Mac.
- Check the firmware version again.
If you still can’t update your firmware, reset your AirPods, then try to update your firmware again.
-
This library provides a simple way to write and manage a blog that's hosted on Unison Cloud and Unison Share.
Here's the idea: you write each blog post as a Doc value, making use of Unison's incredible Doc type to include images, links, hyperlinked Unison code examples, and other rich content with ease. You keep all your blog posts in a project on Unison Share. Then you use this library to assemble them into a blog that gets deployed to Unison Cloud as a beautiful website, complete with RSS/Atom feeds and email subscriptions. See this example blog for an idea of what your blog could look like.
This library provides facilities for customizing the look and feel of your blog, or you can just use the defaults if you like.
-
As part of our ongoing work highlighting what you can do with the Unison Cloud platform, we're excited to announce the Unison Blog Engine library. This library makes it incredibly easy to create and deploy a professional-looking blog, with a particular focus on supporting developers writing about technical concepts. The blog engine ships with an RSS/Atom feed, and supports email notifications out of the box.
-
It would be nice if there was a single place to go to look up all the terms, keywords, and annotations related to Swift concurrency. So here it is. By no means do you need to understand everything here to use concurrency successfully. Let me know what I forgot!
-
-
Dusa is a logic programming language designed by Rob Simmons and Chris Martens, the first implementation of finite-choice logic programming.
- If you’ve heard of Datalog (as implemented in systems like Soufflé), you may want to start by reading about how Dusa is datalog.
- If you’ve heard of answer set programming (as implemented in systems like Potassco), you may want to start by reading about how Dusa is answer set programming.
- If you have no familiarity with either of these, that’s okay too! You may want to start by reading about how Dusa is a graph exploration language. Then you can take a look at some of the other introductions, or fiddle with some of the default examples.
- If you’re interested in the mathematics of finite-choice logic programming, the paper Finite-Choice Logic Programming by Martens, Simmons, and Michael Arntzenius may be of interest.
The easiest way to use Dusa is in our web editor. Dusa is also available as a command-line utility and JavaScript API via the Node package manager.
-
- Async Comms
- Honesty
- Psychological Safety
- Feedback
-
As a test case, I wrote a C compiler, and early tests show the preprocessor to be around 2x faster than Clang's, along with being able to parse multiple translation units in the same process without threading issues, and supporting out-of-order declarations (I can talk about that another time). To clarify, my project isn't an API-compatible replacement for LLVM; it's just another backend that believes in a similar vision to early LLVM.
- Sea-of-nodes IR (as opposed to LLVM's and GCC's SSA CFG)
- Simple type system
- Fast compile times
- Thread-safe modules (can generate and compile two functions at the same time)
- CodeView debug info (Windows debuggers will handle it just fine)
- Capable of both JITing and AOT (there's also early work on directly outputting linked executables thus bypassing the need for a conventional linker)
-
-
This site acts as a supplement to the Swift Package Index by providing build status for additional platforms such as Android, Windows, and Musl.
-
In recent weeks, the Skip team has submitted patches to numerous Swift projects to add Android support to their packages. We’ve been tracking the progress of Android build-ability on our swift-everywhere.org web site, which catalogs a list of many popular Swift packages and whether they compile for Android. At the time of writing, nearly two thousand Swift packages are building successfully for Android, with more being added every day.
This article will go over what our experience porting Swift packages has taught us, and how you can apply this knowledge to turn your parochial Apple-only Swift package into a universal multi-platform package that can build for not just iOS and macOS, but also for Android.
-
-
MLX Swift is a Swift API for MLX.
MLX is an array framework for machine learning on Apple silicon. MLX Swift expands MLX to the Swift language, making research and experimentation easier on Apple silicon.
-
The Swift programming language has a lot of potential to be used for machine learning research because it combines the ease of use and high-level syntax of a language like Python with the speed of a compiled language like C++.
MLX is an array framework for machine learning research on Apple silicon. MLX is intended for research and not for production deployment of models in apps.
MLX Swift expands MLX to the Swift language, making experimentation on Apple silicon easier for ML researchers.
As part of this release we are including:
- A comprehensive Swift API for MLX core
- Higher level neural network and optimizers packages
- An example of text generation with Mistral 7B
- An example of MNIST training
- A C API to MLX which acts as the bridge between Swift and the C++ core
We are releasing all of the above under a permissive MIT license.
This is a big step to enable ML researchers to experiment using Swift.
MLX has several important features for machine learning research that few if any existing Swift libraries support. These include:
- Native support for hardware acceleration. MLX can run compute intensive operations on the CPU or GPU.
- Automatic differentiation for training neural networks and other gradient-based machine learning models
For more information on MLX see the documentation.
The Swift programming language is fast, easy-to-use, and works well on Apple silicon. With MLX Swift, you now have a researcher-friendly machine learning framework with the ability to easily experiment on different platforms and devices.
-
Tuist Registry is a new feature that optimizes the resolution of Swift packages in your projects. Gone are the days when you had to install the full git history of any package you wanted to use – instead, Tuist Registry, built on top of the Swift Package Registry standard, allows you to download only source archives of the package versions you need – saving both time and disk space, locally or on the CI, and making the resolution more deterministic and reliable. The Tuist Registry mirrors the Swift Package Index and is available for any open source Swift package in the community – served from a global storage for low latency.
-
Watch, Upload and Share the Best Immersive Video
-
SwiftUI’s color mixing function offers huge possibilities to enhance the visual appeal of your applications. Any view that utilizes colors as a status indicator can greatly benefit from color mixing. For instance, the priority indicator can represent the priority of a task, calendar event, or any other entity that facilitates calculating the mixing fraction.
let eventPriority: Double
let maximalPriority: Double
Color.gray.mix(with: .red, by: eventPriority / maximalPriority)
We can begin by utilizing the system colors provided by SwiftUI and then apply color mixing based on user input to create visually appealing and dynamic color schemes in our applications.
-
Boundary is a library which helps managing and restraining cross-module dependencies in Elixir projects. A few examples of the things you can do with boundary include:
- Prevent invocations from the context layer to the web layer
- Prevent invocations from the web layer to internal context modules
- Prevent usage of Phoenix and Plug in the context layer
- Limit usage of Ecto in the web layer to only Ecto.Changeset
- Allow :mix modules to be used only at compile time
-
-
-
-
Functional programming languages encourage expressing large parts of a program as declarative data flow pipelines, free of side-effects such as shared mutable state. Such pipelines traverse recursive data by pattern matching, and share the repetitive code of these traversals by defining higher-order functions. Writing programs in functional style eliminates large classes of programmer errors, hence higher-order functions and pattern matching have been adopted by most general purpose programming languages today.
However, pattern matching introduces new modes of failure as well: It is easy to forget a case, and input data that is not covered leads to a crash at runtime. Thus, a compiler should integrate a static program analysis to warn about such uncovered pattern-matches before the program is run.
A compiler should also generate fast code for programs involving higher-order functions. More than 30 years of practical research have evolved the Glasgow Haskell Compiler (GHC) into an industrial strength tool. This evolution brought forth a number of useful and efficient compiler optimisations that are informed by static higher-order analyses. However, the more proficient a higher-order analysis becomes, the harder it gets to explain its implementation to maintainers, let alone convince interested bystanders of its correctness.
In this thesis, I present two results of my work to improve GHC: the first is a static analysis for pattern-match coverage checking that is both more efficient and more precise than the state of the art; the second is a design pattern for deriving static higher-order analyses and dynamic semantics alike from a generic denotational interpreter, in order to share intuition and correctness proofs. This design pattern generalises Cousot’s seminal work on trace-based abstract interpretation to higher-order analyses such as GHC’s Demand Analysis.
-
In this episode we introduce you to a part of our bodies that was invisible to Western scientists until about five years ago; it’s called "the interstitium," a vast network of fluid channels inside the tissues around our organs that scientists have just begun to see, name, and understand. Along the way we look at how new technologies rub up against long-standing beliefs, and how millions of scientists and doctors failed to see what was right in front (and inside!) of their noses. We also find out how mapping the anatomy of this hidden infrastructure may help solve one of the fundamental mysteries of cancer, and perhaps provide a bridge between ancient and modern medicine.
-
This is a book about large language models. As indicated by the title, it primarily focuses on foundational concepts rather than comprehensive coverage of all cutting-edge technologies. The book is structured into four main chapters, each exploring a key area: pre-training, generative models, prompting techniques, and alignment methods. It is intended for college students, professionals, and practitioners in natural language processing and related fields, and can serve as a reference for anyone interested in large language models.
-
Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates.
We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (≥ 100 fps) novel-view synthesis at 1080p resolution.
First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows realtime rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
-
TikTok and ByteDance Ltd. apps are no longer available in the United States, and visitors to the United States might have limited access to features.
Apple is obligated to follow the laws in the jurisdictions where it operates. Pursuant to the Protecting Americans from Foreign Adversary Controlled Applications Act, apps developed by ByteDance Ltd. and its subsidiaries — including TikTok, CapCut, Lemon8, and others — will no longer be available for download or updates on the App Store for users in the United States starting January 19, 2025.
-
Once the idea sinks in, you’ll start seeing all sorts of cool things you can do with optics to generate code. Prisms generalize running initializer code. A Traversal over Code can be implemented as a loop. And since all the sizes are known statically, if you’re feeling plucky, you can decide to unroll the loop right there in the lens.
Outside of the context of Code, the realization that optics are this general is still doing my head in. Something I love about working in Haskell is that I’m still regularly having my mind blown, even after a decade.
-
Glama is a ChatGPT alternative for power users, with features like API gateway, agents, MCP, prompt templates, and more.
-
If there’s one point to take home: in a lot of languages, modules are clearly a bolted-on construction. They’re something added on later to fix “that library problem”, and generally consist of the same “module <-> file” mapping and “a module imports others to bring them into scope” semantics. In ML that’s simply not the case. The module language is a rich, well-thought-out thing with its own methods of abstraction, composition, and even a notion of types!
-
ML is two languages in one: there is the core, with types and expressions, and there are modules, with signatures, structures and functors. Modules form a separate, higher-order functional language on top of the core. There are both practical and technical reasons for this stratification; yet, it creates substantial duplication in syntax and semantics, and it reduces expressiveness. For example, selecting a module cannot be made a dynamic decision. Language extensions allowing modules to be packaged up as first-class values have been proposed and implemented in different variations. However, they remedy expressiveness only to some extent, are syntactically cumbersome, and do not alleviate redundancy.
We propose a redesign of ML in which modules are truly first-class values, and core and module layer are unified into one language. In this "1ML", functions, functors, and even type constructors are one and the same construct; likewise, no distinction is made between structures, records, or tuples. Or viewed the other way round, everything is just ("a mode of use of") modules. Yet, 1ML does not require dependent types, and its type structure is expressible in terms of plain System Fω, in a minor variation of our F-ing modules approach. We introduce both an explicitly typed version of 1ML, and an extension with Damas/Milner-style implicit quantification. Type inference for this language is not complete, but, we argue, not substantially worse than for Standard ML.
An alternative view is that 1ML is a user-friendly surface syntax for System Fω that allows combining term and type abstraction in a more compositional manner than the bare calculus.
-
-
Enable Siri and Apple Intelligence to respond to a person’s questions and action requests for your app’s onscreen content.
When a user asks a question about onscreen content or wants to perform an action on it, Siri and Apple Intelligence can retrieve the content to respond to the question and perform the action. If the user explicitly requests it, Siri and Apple Intelligence can send content to supported third-party services. For example, someone could view a website and use Siri to provide a summary by saying or typing a phrase like “Hey Siri, what’s this document about?”
-
Make your app’s content and actions discoverable with system experiences like Spotlight, widgets, and enhanced action capabilities of Siri, powered by Apple Intelligence.
The App Intents framework provides functionality to deeply integrate your app’s actions and content with system experiences across platforms, including Siri, Spotlight, widgets, controls and more. With Apple Intelligence and enhancements to App Intents, Siri suggests your app’s actions to help people discover your app’s features and gains the ability to take actions in and across apps.
By adopting the App Intents framework, you allow people to personalize their devices by instantly using your app’s functionality with:
- Interactions with Siri, including those that use the personal context awareness and action capabilities of Apple Intelligence.
- Spotlight suggestions and search.
- Actions and automations in the Shortcuts app.
- Hardware interactions that initiate app actions, like the Action button and squeeze gestures on Apple Pencil.
- Focus to allow people to reduce distractions.
-
Bekind Labs is a global leader in digital innovation, empowering people to create meaningful things and grow through kindness and collaboration.
-
TikTok. Wonder what will happen to this link on Sunday, 01/19/2025.
TikTok is THE destination for mobile videos. On TikTok, short-form videos are exciting, spontaneous, and genuine. Whether you’re a sports fanatic, a pet enthusiast, or just looking for a laugh, there’s something for everyone on TikTok. All you have to do is watch, engage with what you like, skip what you don’t, and you’ll find an endless stream of short videos that feel personalized just for you. From your morning coffee to your afternoon errands, TikTok has the videos that are guaranteed to make your day.
-
You can add sub-issues to an issue to break down larger pieces of work into tasks. Your sub-issues show their relationship to the parent issue, allowing you to track your work across GitHub. Parent issues and sub-issue progress are also available in your projects, allowing you to build views, filter, and group by parent issue.
Your sub-issues can themselves contain sub-issues, allowing you to create full hierarchies of issues that visualize entire projects or pieces of work and show the relationships between your issues.
You can add up to 100 sub-issues per parent issue and create up to eight levels of nested sub-issues.
-
OrbStack is the fast, light, and easy way to run Docker containers and Linux. Develop at lightspeed with our Docker Desktop alternative.
-
The term was coined by the programmers at MIT's Project MAC. According to Fernando J. Corbató, who worked on Project MAC around 1963, his team was the first to use the term daemon, inspired by Maxwell's demon, an imaginary agent in physics and thermodynamics that helped to sort molecules, stating, "We fancifully began to use the word daemon to describe background processes that worked tirelessly to perform system chores". Unix systems inherited this terminology. Maxwell's demon is consistent with Greek mythology's interpretation of a daemon as a supernatural being working in the background.
-
In macOS 13 and later, use SMAppService to register and control LoginItems, LaunchAgents, and LaunchDaemons as helper executables for your app. When converting code from earlier versions of macOS, use an SMAppService object and select one of the following methods depending on the type of service your helper executable provides:
- For SMAppServices initialized as LoginItems, the register() and unregister() APIs provide a replacement for SMLoginItemSetEnabled(_:_:).
- For SMAppServices initialized as LaunchAgents, the register() and unregister() methods provide a replacement for installing property lists in ~/Library/LaunchAgents or /Library/LaunchAgents.
- For SMAppServices initialized as LaunchDaemons, the register() and unregister() methods provide a replacement for installing property lists in /Library/LaunchDaemons.
-
-
The fastest way to build and deploy server side Swift applications.
Swift Cloud is based on the premise that infrastructure should be defined alongside your application, in the same language as your application. In our case, Swift. Define a new target, describe your infrastructure, and deploy it with a single command. There are no Dockerfiles, no Terraform configurations, no Node.js packages. Everything is defined in Swift, and the complex configuration is handled behind the scenes, using modern architecture best practices.
-
Swift is a powerful and intuitive programming language that is designed to make writing and maintaining correct programs easier. Swift is growing and evolving, guided by a community-driven process referred to as the Swift evolution process, maintained by the Language Steering Group. This document outlines the Swift evolution process and how a feature grows from a rough idea into something that can improve the Swift development experience for millions of programmers.
-
Source code: https://github.com/ading2210/doompdf
-
Weak references are, if anything, overused as a quick fix for reference cycles, and each weak reference brings added complexity, since your program logic now needs to account for the nil possibility, and that also tends to be handled in "quick fix" ways like guard let x else { return } that are often, but not always, adequate. It is actively harmful to use weak references in places where they aren't needed, so it would be a bad idea to ever introduce them by default.
I think it would be good to consider practices that reduce the likelihood of cycles becoming permanent memory leaks without requiring weak references. While they will still have their place, in many cases there are more robust alternatives:
- Explicitly maintaining the lifetime of callback closures can prevent them from producing permanent reference cycles. For instance, the implementation of APIs that take single-shot callbacks can explicitly discard those callbacks after they've been used, by resetting the closure to nil or an empty closure. Even for APIs with multi-shot callbacks, there is usually in practice some explicit event that ends the need for the callback (the operation being canceled, a UI element being hidden or closed, etc.), and callback closures can be released explicitly in response to this event rather than relying on object death to eventually clean them up.
- On the closure's side, it is often possible to capture only the parts of self that are needed, rather than self in its entirety, in order to avoid forming a cycle. If the closure needs access to the value of some immutable fields, it can capture those fields directly. If it needs to share some state with self, that state could be placed in a separate object that doesn't also own the closure.
It isn't always possible to do these things, and they definitely require more effort than slapping [weak self] on a closure, but not relying on weak references can lead to overall more robust code.
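The "discard single-shot callbacks" alternative is language-agnostic. Here is a minimal Python sketch of the idea (the `Download` class and its methods are invented for illustration, not from any real API): the object drops its reference to the callback as soon as it has fired, so the callback can never keep an object graph alive indefinitely.

```python
class Download:
    """Sketch: explicitly release a single-shot callback after use,
    instead of relying on weak references or object death."""

    def __init__(self):
        self._on_complete = None

    def start(self, on_complete):
        self._on_complete = on_complete

    def finish(self, result):
        # Drop our reference to the callback *before* invoking it, so the
        # completion handler (and anything it captured) can be collected.
        callback, self._on_complete = self._on_complete, None
        if callback is not None:
            callback(result)

results = []
d = Download()
d.start(results.append)
d.finish("ok")
print(results, d._on_complete)  # ['ok'] None
```

After `finish`, the object no longer owns the closure, which is exactly the lifetime discipline the quote recommends over reflexively reaching for weak references.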
-
-
ProjectionLab captures the important details in life that other retirement calculators miss. You'll find it easy and intuitive to build simple but rich financial plans that truly represent you, your loved ones, and the paths you choose.
- ✓ Define the milestones that matter to you
- ✓ Plan for financial independence and other goals
- ✓ Gauge your chance of success
- ✓ Reduce anxiety around your finances
-
With Zuckerberg going full Musk last week, we can no longer let billionaires control our digital public square.
Bluesky is an opportunity to shake up the status quo. They have built scaffolding for a new kind of social web. One where we all have more say, choice and control.
But it will take independent funding and governance to turn Bluesky’s underlying tech—the AT Protocol—into something more powerful than a single app. We want to create an entire ecosystem of interconnected apps and different companies that have people’s interests at heart.
Free Our Feeds will build a new, independent foundation to help make that happen.
This isn't just about bolstering one new social media platform. Our vision offers a pathway to an open and healthy social media ecosystem that cannot be controlled by any company or billionaire.
Join the movement to liberate social media. Will you donate?
-
Besides crashes, Xcode Preview often inexplicably freezes and fails to display preview effects.
When encountering these situations, since we don’t understand how Preview works, apart from handling obvious project compilation errors, we seem to only be able to solve other tricky issues by clearing caches and restarting Xcode.
To better understand the root causes of these issues, this article will explore how SwiftUI Preview works. While we can’t completely eliminate SwiftUI Preview problems, at least understanding the error logs can hopefully provide some insights for your daily development process.
- In Xcode 16, Preview’s working mechanism has undergone significant changes. If you’re interested in how Preview worked before Xcode 16, you can read Building Stable Preview Views — How SwiftUI Preview Works
- Starting from Xcode 16, normal Build and Run process shares build artifacts with Preview, but the execution process differs, with Preview using JIT to run the artifacts
- Preview has three different levels of rebuild operations, suitable for different degrees of source code file modifications.
-
Let's open our eyes to the visual side of things and explore how you can integrate on-device vision models into your app using MLX Swift.
- Add the MLX Vision Examples Package: Include the prebuilt utilities for vision models with the MLXVLM and MLXLMCommon packages. I usually avoid dependencies, but these definitely make my life easier.
- Select a Pre-Trained Vision Model: Use a model from MLX’s registry. We will go with Google's PaliGemma model.
- Load the Model: Download and set up the model weights.
- Prepare Input: Preprocess images for inference.
- Run Inference: Generate results and display them.
-
By properly utilizing Algebraic Data Types (ADTs, not to be confused with abstract data types), you can transform certain types of invalid states from runtime errors into type-checking errors, making them an excellent method for representing data and managing state. Although ADTs may sound complex, they represent a fairly straightforward concept. They are composite types, meaning they reference or contain other types, and primarily consist of two basic categories: product and sum types. If you have experience with tuples, you've worked with product types, and if you've used booleans, you've dealt with sum types. While there are additional types, we'll focus on these two fundamental categories for now.
We'll be using typed Python, since ADTs really shine with the help of type-checking.
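As a minimal sketch of the two categories in typed Python (the `Point`/`Shape` names are illustrative, not from any particular library):

```python
from dataclasses import dataclass
from typing import Union

# Product type: a Point always has an x AND a y (like a tuple).
@dataclass(frozen=True)
class Point:
    x: float
    y: float

# Sum type: a Shape is EITHER a Circle OR a Rect, never both at once.
@dataclass(frozen=True)
class Circle:
    center: Point
    radius: float

@dataclass(frozen=True)
class Rect:
    top_left: Point
    width: float
    height: float

Shape = Union[Circle, Rect]

def area(shape: Shape) -> float:
    # A type checker can verify that every variant of the union is handled,
    # turning a forgotten case from a runtime surprise into a type error.
    if isinstance(shape, Circle):
        return 3.14159 * shape.radius ** 2
    return shape.width * shape.height
```

Because `Shape` can only ever be one of its variants, "invalid state" such as a circle with a width simply cannot be constructed.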
-
Block ads, trackers, and more
Wipr blocks ads, popups, trackers, cookie warnings, and other nasty things that make the web slow and ugly.
Websites in Safari will look clean, load fast, and stop invisibly tracking you. You’ll notice significant improvements to your battery life and data usage. Setup is a snap.
-
You are Apple. You want to make search work like magic in the Photos app, so the user can find all their “dog” pictures with ease. You devise a way to numerically represent the concepts of an image, so that you can find how closely images are related in meaning. Then, you create a database of known images and their numerical representations (“this number means car”), and find the closest matches. To preserve privacy, you put this database on the phone.
All of this, as cool as it might sound, is a solved problem. This “numerical representation” is called an embedding vector. A vector is a series of coordinates in a very high dimensional space. One dimension might measure how “dog-like” a thing is. Another might measure how “wild-like” a thing is. Dog-like and wild-like? That’s a wolf. We can compare distances using algorithms like cosine similarity. We are quite good at turning text into vectors, and only slightly worse at doing the same for images.
But then, your database grows. Your users don’t want all dogs, they want golden retrievers. You can no longer fit this database on a device. You’re tempted to store this database on your servers, and send the numerical representation computed on device off to them. This should be fine: vectorization is a lossy operation. But then you would know that Amy takes lots of pictures of golden retrievers, and that is a political disaster.
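The comparison step described above can be sketched in a few lines. Real embeddings have hundreds of dimensions; these toy 3-D vectors (dog-like, wild-like, machine-like) are invented for illustration:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy axes: (dog-like, wild-like, machine-like)
dog = [0.9, 0.1, 0.0]
wolf = [0.8, 0.9, 0.0]
car = [0.0, 0.0, 0.9]

# A wolf is closer in meaning to a dog than a car is.
assert cosine_similarity(dog, wolf) > cosine_similarity(dog, car)
```

Finding "the closest matches" in a photo library is then just ranking stored vectors by this score against the query vector.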
-
Magic is a package manager and virtual environment manager for any language, including Python and Mojo. It builds upon the conda and PyPI packaging ecosystems, which provide access to thousands of packages for Python and other languages, while also adding functionality for MAX and Mojo.
The `magic` CLI allows you to instantly launch code examples and create new projects that are fully contained and reproducible across systems. All the package dependencies and environment settings are magically managed for you. This page provides an introduction to basic `magic` commands. For a deep-dive into more features, see the Magic tutorial.
-
This tutorial is an introduction to Mirth as it currently exists. It is written with the target audience of people who have some prior experience with concatenative programming languages, and some prior experience with statically typed programming languages. It is written with the intent of getting you set up to use Mirth as a programming language, and teaching you the language through a series of simple programs.
Mirth is a work in progress and is unstable. So although the information in this file will get you set up in the current version of mirth, it might not be necessarily true or accurate in the future. To mitigate this somewhat, I will include little notes in square brackets [like this] to let you know about changes that are planned for the future. Do take these with a grain of salt, like everything else, those plans may change as we continue to work on mirth.
If something in this file does not work in the current version of mirth in this repository, please raise an issue so we can fix this tutorial. Thank you!
-
Overall, this release aims to have parity with Unison Local (which it will eventually replace), and it's basically there, with a few extra features. Here's a short breakdown of the current feature set:
- Browse project codebases: open definitions, clickable code, signature detail on hover, dependency indicators
- Search: search definitions in the same manner as Unison Local (later we'd love to get a search similar to the front page of Unison Share).
- Improved workspace management: resizable sidebar and new (albeit basic) split panes. Keyboard navigable (`w` followed by arrow keys to switch focus between panes).
- Keyboard shortcuts: much like Unison Local, the app has a lot of keyboard navigational shortcuts, though they aren't yet very discoverable.
-
In this blog post we'll go through the process of signing a CLI for macOS (Darwin).
You’ve built a portable CLI in a programming language like Zig, Rust, Go, or Swift. Through continuous integration you build binaries for the various supported platforms and architectures, and through a bash script or any other installation method, you make it easier for users to install it. However, after users install it and try to open it, they get an error:
"'your-cli' can't be opened because Apple cannot check it for malicious software."
This error occurs because macOS has a security feature called Gatekeeper, which is designed to ensure that only trusted software runs on the system. When you try to open your CLI, Gatekeeper checks if the software has been signed and notarized by Apple. If it hasn’t, macOS will block it from running, displaying the error message mentioned above.
Homebrew, a popular package manager for macOS, attempts to apply an ad-hoc signature to the file using `codesign --sign -` to prevent this error from happening. However, we recommend you sign it with your own identity to ensure that the software is from a verified source. All the code examples in this blog post are available in this gist.
-
NeuralSVG generates vector graphics from text prompts with ordered and editable shapes. Our method supports dynamic conditioning, such as background color, which facilitates the generation of multiple color palettes from a single learned representation.
Vector graphics are essential in design, providing artists with a versatile medium for creating resolution-independent and highly editable visual content. Recent advancements in vision-language and diffusion models have fueled interest in text-to-vector graphics generation. However, existing approaches often suffer from over-parameterized outputs or treat the layered structure — a core feature of vector graphics — as a secondary goal, diminishing their practical use. Recognizing the importance of layered SVG representations, we propose NeuralSVG, an implicit neural representation for generating vector graphics from text prompts. Inspired by Neural Radiance Fields (NeRFs), NeuralSVG encodes the entire scene into the weights of a small MLP network, optimized using Score Distillation Sampling (SDS). To encourage a layered structure in the generated SVG, we introduce a dropout-based regularization technique that strengthens the standalone meaning of each shape. We additionally demonstrate that utilizing a neural representation provides an added benefit of inference-time control, enabling users to dynamically adapt the generated SVG based on user-provided inputs, all with a single learned representation. Through extensive qualitative and quantitative evaluations, we demonstrate that NeuralSVG outperforms existing methods in generating structured and flexible SVG.
-
To improve the lifespan of your battery, your iPhone learns from your daily charging habits.
A battery’s lifespan is related to its chemical age, which is more than just the length of time since the battery was assembled. A battery's chemical age results from a complex combination of several factors, including temperature history and charging pattern. All rechargeable batteries are consumable components that become less effective as they chemically age. As lithium-ion batteries chemically age, the amount of charge they can hold diminishes, resulting in reduced battery life and reduced peak performance. Learn more about iPhone battery and performance and how to maximize battery performance and lifespan.
Optimized Battery Charging is designed to reduce the wear on your battery and improve its lifespan by reducing the time your iPhone spends fully charged. It is available when Charge Limit is set to 100 percent. When the feature is enabled, your iPhone will delay charging past 80 percent in certain situations. Your iPhone uses on-device machine learning to learn your daily charging routine so that Optimized Battery Charging activates only when your iPhone predicts it will be connected to a charger for an extended period of time. The algorithm aims to ensure that your iPhone is still fully charged when unplugged.
When Optimized Battery Charging is active, a notification on the Lock Screen says when your iPhone will be fully charged. If you need to have your iPhone fully charged sooner, touch and hold the notification and then tap Charge Now.
-
Unleash your creativity in 3D anywhere & anytime.
Valence 3D has been built from the ground up to maximize fun and flow while designing 3D models on iPad and iPhone.
Packed with a powerful suite of polygon & subdivision surface modeling tools, whether a novice or expert, there's no limit to what you can create.
-
Percent-encode strings quickly
EncodeDecode is a lightweight menu bar app that makes URL encoding and decoding quick and effortless. Designed for developers and power users, it offers convenient access and customization options to fit your workflow.
-
Codable conformance makes enums in Swift even more powerful and versatile. While automatic synthesis covers many scenarios, customization and fully manual implementations offer the flexibility to handle complex requirements. With these tools, we can work confidently with serialized data, ensuring that our enums integrate seamlessly into real-world applications.
- Customizing case names
- Customizing associated value keys
- Excluding cases or values
-
A mechanism to communicate with a shared key's external system, synchronously or asynchronously.
A continuation is passed to `SharedReaderKey/load(context:continuation:)` so that state can be shared from an external system.
Important: You must call a resume method exactly once on every execution path from the shared key it is passed to, i.e. in `SharedReaderKey/load(context:continuation:)`. Resuming from a continuation more than once is considered a logic error, and only the first call to `resume` will be executed. Never resuming leaves the task awaiting the call to `Shared/load()` in a suspended state indefinitely and leaks any associated resources. `LoadContinuation` reports an issue if either of these invariants is violated.
-
The LLM evals platform for enterprises. Humanloop gives you the tools that top teams use to ship and scale AI with confidence.
-
This option changes the size of the buffer that Git uses when pushing data to a remote over HTTP or HTTPS. If the data is larger than this size, libcurl, which handles the HTTP support for Git, will use chunked transfer encoding since it isn’t known ahead of time what the size of the pushed data will be.
Leaving this value at the default size is fine unless you know that either the remote server or a proxy in the middle doesn’t support HTTP/1.1 (which introduced the chunked transfer encoding) or is known to be broken with chunked data. This is often (erroneously) suggested as a solution for generic push problems, but since almost every server and proxy supports at least HTTP/1.1, raising this value usually doesn’t solve most push problems. A server or proxy that didn’t correctly support HTTP/1.1 and chunked transfer encoding wouldn’t be that useful on the Internet today, since it would break lots of traffic.
Note that increasing this value will increase the memory used on every relevant push that Git does over HTTP or HTTPS, since the entire buffer is allocated regardless of whether or not it is all used. Thus, it’s best to leave it at the default unless you are sure you need a different value.
-
An extremely fast Python package and project manager, written in Rust.
- 🚀 A single tool to replace `pip`, `pip-tools`, `pipx`, `poetry`, `pyenv`, `twine`, `virtualenv`, and more.
- ⚡️ 10-100x faster than `pip`.
- 🐍 Installs and manages Python versions.
- 🛠️ Runs and installs Python applications.
- ❇️ Runs scripts, with support for inline dependency metadata.
- 🗂️ Provides comprehensive project management, with a universal lockfile.
- 🔩 Includes a pip-compatible interface for a performance boost with a familiar CLI.
- 🏢 Supports Cargo-style workspaces for scalable projects.
- 💾 Disk-space efficient, with a global cache for dependency deduplication.
- ⏬ Installable without Rust or Python via `curl` or `pip`.
- 🖥️ Supports macOS, Linux, and Windows.
-
Documentation for the JSON Lines text file format
This page describes the JSON Lines text format, also called newline-delimited JSON. JSON Lines is a convenient format for storing structured data that may be processed one record at a time. It works well with unix-style text processing tools and shell pipelines. It's a great format for log files. It's also a flexible format for passing messages between cooperating processes.
The JSON Lines format has three requirements:
- UTF-8 Encoding
- Each Line is a Valid JSON Value
- Line Separator is `'\n'`
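The three requirements above make reading and writing the format trivial; a minimal Python sketch needs nothing beyond the standard JSON library and line splitting:

```python
import json

records = [
    {"name": "Gilbert", "wins": 2},
    {"name": "Alexa", "wins": 1},
]

# Write: one JSON value per line, joined by '\n'.
jsonl = "\n".join(json.dumps(r) for r in records)

# Read: process one record at a time, without loading the whole stream.
parsed = [json.loads(line) for line in jsonl.splitlines()]
assert parsed == records
```

Because each line is independent, the same loop works on a multi-gigabyte log file streamed through a shell pipeline.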
-
We introduce AudioBench, a new benchmark designed to evaluate audio large language models (AudioLLMs). AudioBench encompasses 8 distinct tasks and 26 carefully selected or newly curated datasets, focusing on speech understanding, voice interpretation, and audio scene understanding. Despite the rapid advancement of large language models, including multimodal versions, a significant gap exists in comprehensive benchmarks for thoroughly evaluating their capabilities. AudioBench addresses this gap by providing relevant datasets and evaluation metrics. In our study, we evaluated the capabilities of four models across various aspects and found that no single model excels consistently across all tasks. We outline the research outlook for AudioLLMs and anticipate that our open-source code, data, and leaderboard will offer a robust testbed for future model developments.
-
I've been making websites since 1993, from the tiniest projects to apps at humongous scale. I've used and dissected almost all frameworks under the sun and have made many mistakes, especially when it comes to long-term maintenance. But why believe me? Here are some other folks talking about it in great detail:
-
The ImmutableData Programming Guide is inspired by “long-form” documentation like Programming with Objective-C and The Swift Programming Language.
This guide includes the following chapters:
- Chapter 00: We discuss the history and evolution of Flux, Redux, and SwiftUI. In what ways did SwiftUI evolve in a similar direction as React? How can our `ImmutableData` architecture use ideas from React to improve product engineering for SwiftUI?
- Chapter 01: We build the `ImmutableData` module for managing the global state of our application.
- Chapter 02: We build the `ImmutableUI` module for making our global state available to SwiftUI view components.
- Chapter 03: We build the data models of our Counter application: a simple SwiftUI app to increment and decrement an integer.
- Chapter 04: We build the component graph of our Counter application.
- Chapter 05: We build and run our Counter application.
- Chapter 06: We build the data models of our Animals application: a SwiftUI app to store a collection of data models with persistence to a local database.
- Chapter 07: We build a command-line utility for testing the data models of our Animals application without any component graph.
- Chapter 08: We build the component graph of our Animals application.
- Chapter 09: We build and run our Animals application.
- Chapter 10: We build the data models of our Quakes application: a SwiftUI app to fetch a collection of data models from a remote server with persistence to a local database.
- Chapter 11: We build a command-line utility for testing the data models of our Quakes application without any component graph.
- Chapter 12: We build the component graph of our Quakes application.
- Chapter 13: We build and run our Quakes application.
- Chapter 14: We update the data models of our Animals application to support persistence to a remote server.
- Chapter 15: We build an HTTP server for testing our new Animals application.
- Chapter 16: We build a command-line utility for testing the data models of our new Animals application without any component graph.
- Chapter 17: We build and run our new Animals application.
- Chapter 18: We learn about specialized data structures that can improve the performance of our applications when working with large amounts of data that is copied many times.
- Chapter 19: We run benchmarks to measure how the performance of immutable collection values compare to SwiftData.
- Chapter 20: Here are some final thoughts about what’s coming next.
-
“What is the best design pattern for SwiftUI apps?”
We hear this question a lot. Compared to the days when AppKit and UIKit were the dominant frameworks for product engineering in the Apple Ecosystem, Apple has been relatively un-opinionated about what kind of design pattern engineers should choose “by default” for their SwiftUI applications.
Many engineers in the SwiftUI community are currently evangelizing a “MVVM” design pattern. Other engineers are making the argument that SwiftUI is really encouraging a “MVC” design pattern. You might have also heard discussion of a “MV” design pattern. These design patterns share a fundamental philosophy: the state of your application is managed from your view components using imperative logic on mutable model objects. To put it another way, these design patterns start with a fundamental assumption of mutability that drives the programming model that product engineers must opt-in to when building graphs of view components. The “modern and declarative” programming model product engineers have transitioned to for SwiftUI is then paired with a “legacy and imperative” programming model for managing shared mutable state.
Over the course of this project, we present what we think is a better way. Drawing on over a decade of experience shipping products at scale using declarative UI frameworks, we present a new application architecture for SwiftUI. Using the Flux and Redux architectures as a philosophical “prior art”, we can design an architecture using Modern Swift and specialized for Modern SwiftUI. This architecture encourages declarative thinking instead of imperative thinking, functional programming instead of object-oriented programming, and immutable model values instead of mutable model objects.
We call this framework and architecture `ImmutableData`. We present `ImmutableData` as a free and open-source project with free and open-source documentation. Over the course of this tutorial, we will show you, step-by-step, how the `ImmutableData` infra is built. Once the infra is ready, we will then build, step-by-step, multiple sample applications using SwiftUI to display and transform state through the `ImmutableData` architecture.
-
Claude’s extended context window (200K tokens for Claude 3 models) enables handling complex, data-rich tasks. This guide will help you leverage this power effectively.
- Put longform data at the top: Place your long documents and inputs (~20K+ tokens) near the top of your prompt, above your query, instructions, and examples. This can significantly improve Claude’s performance across all models.
- Structure document content and metadata with XML tags: When using multiple documents, wrap each document in `<document>` tags with `<document_content>` and `<source>` (and other metadata) subtags for clarity.
- Ground responses in quotes: For long document tasks, ask Claude to quote relevant parts of the documents first before carrying out its task. This helps Claude cut through the “noise” of the rest of the document’s contents.
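Putting the first two tips together, a prompt builder might look like the following sketch. The tag names follow the guide; the `build_prompt` helper and the sample document are invented for illustration:

```python
def build_prompt(documents: list[dict], query: str) -> str:
    """Assemble a long-context prompt: documents at the top, query last."""
    parts = ["<documents>"]
    for i, doc in enumerate(documents, start=1):
        # Wrap each document in XML tags with metadata subtags.
        parts.append(
            f'<document index="{i}">'
            f"<source>{doc['source']}</source>"
            f"<document_content>{doc['content']}</document_content>"
            "</document>"
        )
    parts.append("</documents>")
    # Query and instructions come AFTER the longform data.
    parts.append(query)
    return "\n".join(parts)

prompt = build_prompt(
    [{"source": "report.pdf", "content": "Q3 revenue grew 12%."}],
    "Quote the relevant figure, then summarize the report.",
)
```

The key design choice is ordering: the ~20K+ tokens of document content precede the query, which is what the first tip says improves performance.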
-
Returns a Boolean value indicating whether the given object is known to have a single strong reference.
-
We're building the next-gen operating system for AI agents.
Modern AI will fundamentally change how people use software in their daily lives. Agentic applications could, for the first time, enable computers to work with people in much the same way people work with people.
But it won’t happen without removing a ton of blockers. We need new UI patterns, a reimagined privacy model, and a developer platform that makes it radically simpler to build useful agents. That’s the challenge we’re taking on.
-
Synthetic Agda is an extension of the Agda programming language and proof assistant to support advanced forms of synthetic mathematics. For this purpose, we extend Agda with various constructions that allow it to be customized to reflect the internal language of any Grothendieck topos (or, more generally, any Grothendieck ∞-topos), and moreover for a broad class of type-theoretic constructions (including inductive types, higher inductive types, coinductive types, adjoint functors, etc.) to be studied in this setting. This is accomplished by extending Agda with various axioms, all of which are given computational rules via Agda’s term rewriting system. Perhaps surprisingly, this all can be done by enabling some of Agda’s modal features (namely the `--cohesion` and `--flat-split` flags) and then using Agda’s mechanisms of postulates and rewrite rules to assert the relevant axioms and their computational rules. As such, Synthetic Agda can be (and has been) implemented as an Agda module, such that making use of Synthetic Agda’s features requires merely importing this module. In fact, this document is that module, written as literate Agda that also serves as a guide to the features and basic definitions of Synthetic Agda, explaining each in turn as it is introduced.
-
`Genius` is the world's first natural computing system based on principles found in physics and neuroscience. `Genius` allows developers to generate world models and intelligent agents that enable optimal and hyper-personalized predictions, recommendations, and automations.
-
VERSES is a cognitive computing company building next-generation intelligent software systems modeled after the Wisdom and Genius of Nature.
-
A (nearly) no-CSS, fast, minimalist Zola theme. Ported from riggraz's no style, please! Jekyll theme; you can find the demo here.
-
By default, GitHub Pages uses Jekyll (a Ruby-based static site generator), but you can also publish any generated files provided you have an `index.html` file in the root of a branch called `gh-pages`, `main` or `master`. In addition, you can publish from a `docs` directory in your repository. That branch name can also be manually changed in the settings of a repository. To serve a site at `<username>.github.io` or `<organization>.github.io`, you must name the repository `<username>.github.io` or `<organization>.github.io` (otherwise GitHub will append the repository name to the URL, e.g.: `<username>.github.io/<repositoryname>`).
We can use any continuous integration (CI) server to build and deploy our site. For example:
In either case, it seems to work best if you use `git submodule` to include your theme, e.g.:
git submodule add https://github.com/getzola/after-dark.git themes/after-dark
-
Tasmota is an open source firmware for Espressif ESP8266, ESP32, ESP32-S or ESP32-C3 chipset based devices created and maintained by Theo Arends.
Everything began as Sonoff-MQTT-OTA with a commit on 25th January 2016, by Theo Arends. Its goal was to provide ESP8266 based ITEAD Sonoff devices with MQTT and 'Over the Air' or OTA firmware.
What started as a simple way to hack a cloud bound Sonoff Basic (one of the first cheap and accessible smart home devices in the market) into a locally controlled device has grown into a fully fledged ecosystem for virtually any ESP8266 based device.
-
Broken Mirror: iPhone Mirroring at Work May Expose Employees’ Personal Information, Sevco Research Finds
The most succinct repro steps we’ve found use `mdfind`:
mdfind "kMDItemContentTypeTree == com.apple.application" | grep Daemon
`mdfind` is a command line interface into Spotlight, the macOS search subsystem which indexes file metadata. When executed in a terminal window that has been granted full disk access without setting up iPhone Mirroring, you will see a normal list of macOS applications. When executed in that same terminal window after setting up iPhone Mirroring, you will also see personal iOS applications and metadata.
The files we observed were all in the directory:
/Users/<user>/Library/Daemon Containers/<uuid>/Data/Library/Caches/<app_name>
Those directories contain application bundles, but unlike a normal macOS application bundle, which contains the executable code and metadata (like icons, application name, dates, version, and file descriptions), these are “app stubs” and contain just the metadata. For example, the iOS Watch.app is 83MB, but the macOS Watch app stub is just 291KB.
-
The `objc_non_runtime_protocol` attribute can be used to mark that an Objective-C protocol is only used during static type-checking and doesn’t need to be represented dynamically. This avoids several small code-size and run-time overheads associated with handling the protocol’s metadata. A non-runtime protocol cannot be used as the operand of a `@protocol` expression, and dynamic attempts to find it with `objc_getProtocol` will fail. If a non-runtime protocol inherits from any ordinary protocols, classes and derived protocols that declare conformance to the non-runtime protocol will dynamically list their conformance to those bare protocols.
-
iOS, iPadOS, macOS, and watchOS include a security feature called BlastDoor, first introduced in iOS 14 and related releases. The goal of BlastDoor is to help protect the system by corralling attackers—increasing the complexity of their efforts to exploit Messages and Apple Identity Services (IDS). BlastDoor isolates, parses, transcodes, and validates untrusted data arriving in Messages, IDS and other vectors to help prevent attacks.
BlastDoor does this by employing sandbox restrictions and memory safe validation of output which creates a significant obstacle for attackers to overcome before reaching other parts of the operating system. It’s designed to drastically improve user protection against attacks, particularly “0-click” attacks—those that don’t require user interaction.
Finally, Messages treats traffic from “known senders” differently than traffic from “unknown senders”, offering a different set of functionality to each group and segmenting “known” versus “unknown” data into distinct BlastDoor instances.
-
Delivering stable APIs is crucial to ensure that new versions do not affect existing applications while still incorporating new features and bug fixes.
- Add API diffing to your CI pipeline: This helps catch breaking changes before they make it to your main branch. Check out how Adyen iOS uses GitHub actions to detect API changes.
- Use semantic versioning: Adopt semantic versioning to communicate the nature of changes to your users.
- Document breaking changes: Keep a clear changelog and provide a migration guide for major SDK upgrades.
- Plan for API evolution: Always consider backward compatibility when designing new features.
- Deprecation strategy: Announce deprecations in advance to ensure developers are prepared when functionalities are removed from the SDK.
Start by integrating API diffing into your development workflow:
- Choose the tool that best fits your needs (at Adyen, we've made Swift Public API Diff an indispensable tool for our iOS projects).
- Set up automated checks in your CI pipeline.
- Establish a process for reviewing and communicating API changes.
- Create guidelines for API evolution in your team.
Remember: The best API break is the one that never happens. With these tools and practices in place, you can evolve your SDK with confidence while keeping your users happy.
Ready to try it out? Start with Swift Public API Diff in your next PR review and see the difference it makes in catching potential breaks before they reach your users.
-
Then, when it was time to choose the best fitting option, a quick `ViewThatFits` with all the options enumerated in a `ForEach` yields the best fitting view with no ellipses.
ViewThatFits(in: .horizontal) {
    ForEach(titleOptions, id: \.self) { title in
        Text(title)
            .font(.subheadline)
    }
}
With this change, SwiftUI’s sizing system will now choose the best title option that will fit within the space that we have available. There are other uses of ViewThatFits for accessibility purposes, namely these two posts, but those are both focused on switching from a horizontal layout to a vertical layout. While that strategy is useful, the approach laid out here is slightly different.
-
All code you write MUST be fully optimized.
"Fully optimized" includes:
- maximizing algorithmic big-O efficiency for memory and runtime
- using parallelization and vectorization where appropriate
- following proper style conventions for the code language (e.g. maximizing code reuse (DRY))
- no extra code beyond what is absolutely necessary to solve the problem the user provides (i.e. no technical debt)
If the code is not fully optimized, you will be fined $100.
Write Python code to solve this problem:
Given a list of 1 million random integers between 1 and 100,000, find the difference between the smallest and the largest numbers whose digits sum up to 30.
Before writing the code, plan out all the necessary optimizations.
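For reference, a straightforward single-pass solution to the stated problem looks like the following sketch (function and variable names are my own, and this is the plain version, not the micro-optimized one the prompt demands):

```python
import random

def digit_sum_range(nums: list[int], target: int = 30) -> int:
    """Difference between the largest and smallest numbers whose digits sum to target."""
    # Single O(n) pass, tracking only the running min and max.
    smallest, largest = None, None
    for n in nums:
        if sum(int(d) for d in str(n)) == target:
            if smallest is None or n < smallest:
                smallest = n
            if largest is None or n > largest:
                largest = n
    return 0 if smallest is None else largest - smallest

random.seed(0)
nums = [random.randint(1, 100_000) for _ in range(1_000_000)]
result = digit_sum_range(nums)
```

The optimizations the prompt fishes for (precomputing digit sums for the bounded 1-100,000 range, vectorizing with numpy, or JIT-compiling the loop) all preserve this same O(n) structure.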
-
In all, asking an LLM to “write code better” does indeed make the code better, depending on your definition of better. Through the use of the generic iterative prompts, the code did objectively improve from the base examples, both in terms of additional features and speed. Prompt engineering improved the performance of the code much more rapidly and consistently, but was more likely to introduce subtle bugs as LLMs are not optimized to generate high-performance code. As with any use of LLMs, your mileage may vary, and in the end it requires a human touch to fix the inevitable issues no matter how often AI hypesters cite LLMs as magic.
Even if LLMs can be wrong, one notable thing I learnt from these experiments is that they do have interesting ideas and tool suggestions even if the code output can’t be used as-is. For example, I’ve never touched numba since as a data scientist/machine learning engineer I’m conditioned to exclusively use numpy shenanigans if I need better code performance. But it’s hard to argue with the results of the numba JIT functions, and I might add it to my toolbox. When testing a similar “make it better” prompt iteration workflow in other technical domains such as website backends and frontends, the LLMs had good ideas there too.
Of course, these LLMs won’t replace software engineers anytime soon, because it requires a strong engineering background to recognize what is actually a good idea, along with other constraints that are domain specific. Even with the amount of code available on the internet, LLMs can’t discern between average code and good, highly-performant code without guidance. Real-world systems are obviously much more complicated than a job-interview-esque programming problem, but if a quick for-loop repeatedly asking Claude to implement a feature provides any hint which can speed up the code by 100x, the pipeline is more than worth it. Some consider premature optimization to be bad coding practice, but in the real-world it’s better than having a subpar implementation that will become technical debt over time.
-
The two ways to approach improving app performance from protocol conformance checks are to minimize the number of conformances and `as?` operations. Emerge Tools’ app size analysis can help with both of these. We’ve always known app size is a leading indicator for app quality, and it’s demonstrated clearly here in the case of protocol conformances. By focusing on binary size reductions you’ll remove conformances from your app and make the runtime faster.
-
The Swift compiler has the ability to "specialize" a generic function at compile time. This specialization creates a custom implementation of the function in which the generic placeholders are substituted with specific types, which can unlock optimizations that make the specialized function dramatically faster than the unspecialized version in some circumstances. The compiler can generate this specialized version and call it when it can see, at the call site, which concrete types a function is being called with, as well as the body of the function to specialize.
In some cases, though, this information is obscured from the compiler. This proposal introduces a new attribute, @specialize, which allows the author of a generic function to generate pre-specialized versions of that function for specific types. When the unspecialized version of the function is called with one of those types, the compiler will generate code that re-dispatches to those prespecialized versions if available.
-
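A minimal sketch of the idea in the proposal above, using the long-standing underscored precursor @_specialize that the proposal formalizes (the function and types here are illustrative):

```swift
// Ask the compiler to emit a version of `total` with T bound to Int,
// alongside the unspecialized generic version.
@_specialize(where T == Int)
func total<T: AdditiveArithmetic>(_ values: [T]) -> T {
    values.reduce(.zero, +)
}

print(total([1, 2, 3]))
```

When the caller's concrete type is hidden from the compiler, the generic entry point can re-dispatch to the Int specialization instead of running fully unspecialized.
-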
Legato is a lean, open platform that connects speakers across brands into a unified music experience, bringing simplicity, sustainability, and compatibility to your music listening.
-
The MLIR project is a novel approach to building reusable and extensible compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain specific compilers, and aid in connecting existing compilers together.
MLIR is intended to be a hybrid IR which can support multiple different requirements in a unified infrastructure. For example, this includes:
- The ability to represent dataflow graphs (such as in TensorFlow), including dynamic shapes, the user-extensible op ecosystem, TensorFlow variables, etc.
- Optimizations and transformations typically done on such graphs (e.g. in Grappler).
- Ability to host high-performance-computing-style loop optimizations across kernels (fusion, loop interchange, tiling, etc.), and to transform memory layouts of data.
- Code generation “lowering” transformations such as DMA insertion, explicit cache management, memory tiling, and vectorization for 1D and 2D register architectures.
- Ability to represent target-specific operations, e.g. accelerator-specific high-level operations.
- Quantization and other graph transformations done on a Deep-Learning graph.
- Polyhedral primitives.
- Hardware Synthesis Tools / HLS.
MLIR is a common IR that also supports hardware specific operations. Thus, any investment into the infrastructure surrounding MLIR (e.g. the compiler passes that work on it) should yield good returns; many targets can use that infrastructure and will benefit from it.
MLIR is a powerful representation, but it also has non-goals. We do not try to support low level machine code generation algorithms (like register allocation and instruction scheduling). They are a better fit for lower level optimizers (such as LLVM). Also, we do not intend MLIR to be a source language that end-users would themselves write kernels in (analogous to CUDA C++). On the other hand, MLIR provides the backbone for representing any such DSL and integrating it in the ecosystem.
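For flavor, a tiny example of MLIR's textual IR, mixing the func and arith dialects on dynamically shaped tensors (an illustrative sketch, not taken from the MLIR docs):

```mlir
// Elementwise multiply of two dynamically shaped 1-D tensors.
// `?` marks a dimension whose size is only known at runtime.
func.func @scale(%lhs: tensor<?xf32>, %rhs: tensor<?xf32>) -> tensor<?xf32> {
  %0 = arith.mulf %lhs, %rhs : tensor<?xf32>
  return %0 : tensor<?xf32>
}
```

The same infrastructure that parses, verifies, and transforms this function also serves graph-level dialects and target-specific ones, which is the "hybrid IR" point above.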
-
A Swift toolchain installer and manager, written in Swift.
To install swiftly, run the following command in your terminal.
curl -L https://swiftlang.github.io/swiftly/swiftly-install.sh | bash
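Once installed, typical usage looks something like the following (a sketch based on swiftly's documented subcommands; the version number is illustrative):

```shell
# Install the latest stable toolchain and make it active.
swiftly install latest

# Install a specific version and switch between toolchains.
swiftly install 5.10.1
swiftly use 5.10.1

# List the toolchains swiftly manages.
swiftly list
```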
-
Xcode includes a release of Swift that is supported by Apple. You can try out a version that is still in development by downloading one of the packages from the download page.
-
CedarDB is a relational-first database system that delivers best-in-class performance for all your workloads, from transactional to analytical to graph, accessible through PostgreSQL’s tools and SQL dialect. Here's the story of why we're doing what we're doing, how we got here, and why it should matter to you.
-
There is much to cover from the past year, from 10-figure acquisitions to vendors running wild in the streets with license changes to the most famous database octogenarian splashing cash to recruit a college quarterback to impress his new dimepiece.
-
This proposal introduces a last value rule, for the purpose of determining the return value of a function, and of the value of an if or switch expression that contains multiple statements in a single branch. It also introduces do expressions.
-
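A sketch of what the last value rule proposed above would allow (hypothetical syntax; current Swift accepts only a single expression per branch of an if expression):

```swift
// Hypothetical under the proposal: the last value in a branch
// becomes the value of the whole `if` expression.
let temperature = 35
let label =
    if temperature > 30 {
        let note = "hot"
        print("logging: \(note)")  // extra statements would be allowed
        note                       // last value rule: the branch's value
    } else {
        "mild"
    }
```
-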
.env files. If you’ve worked on a web application, you’ve probably seen one. While they certainly get the job done, .env files have shortcomings that can create friction in development workflows. We’ve touched on .env files in past articles about xcconfig files and secret management on iOS. But this week on NSHipster we’re taking a deeper look, exploring how the lesser-known 1Password CLI (op) can solve some problems many of us face managing secrets day-to-day.
-
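The core pattern the article above explores, sketched with hypothetical vault, item, and field names (requires the op CLI and a signed-in 1Password account):

```shell
# .env holds a reference to the secret, not the secret itself:
#
#   DATABASE_PASSWORD="op://dev-vault/db/password"
#
# `op run` resolves op:// references at launch and injects the real
# values into the child process's environment only:
op run --env-file=".env" -- ./server
```

The plaintext secret never lands on disk, which addresses the biggest shortcoming of traditional .env files.
-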
GitHub Copilot Extensions are a type of Copilot Extension built with GitHub Apps. GitHub Copilot Extensions are best suited for developers who want cross-platform compatibility, app management, and support from GitHub.
-
You can allow your Copilot Extension to receive context from the editor, such as the currently opened file, by enabling the Read-only access level for the "Copilot Editor Context" permission in your GitHub App settings. See Creating a GitHub App for your Copilot Extension.
The GitHub Copilot Extensibility Platform automatically handles messaging when implicit and explicit context is unavailable or unauthorized. To enable context passing, you are required to request permissions from users. When requesting permissions, follow these best practices:
- Clearly communicate what context you need and what you need it for.
- Implement appropriate error handling for unavailable context in your own application logic and API calls.
- In the event context is unavailable, provide value where possible without this data.
- Request only the minimum required permissions for your extension.
Context passing respects content exclusions, .env files, and files listed in the content exclusion settings.
-
A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.
This is a sequel to my review of 2023.
In this article:
- The GPT-4 barrier was comprehensively broken
- Some of those GPT-4 models run on my laptop
- LLM prices crashed, thanks to competition and increased efficiency
- Multimodal vision is common, audio and video are starting to emerge
- Voice and live camera mode are science fiction come to life
- Prompt driven app generation is a commodity already
- Universal access to the best models lasted for just a few short months
- “Agents” still haven’t really happened yet
- Evals really matter
- Apple Intelligence is bad, Apple’s MLX library is excellent
- The rise of inference-scaling “reasoning” models
- Was the best currently available LLM trained in China for less than $6m?
- The environmental impact got better
- The environmental impact got much, much worse
- The year of slop
- Synthetic training data works great
- LLMs somehow got even harder to use
- Knowledge is incredibly unevenly distributed
- LLMs need better criticism
- Everything tagged “llms” on my blog in 2024
-
Chatbot Arena is an open platform for crowdsourced AI benchmarking, developed by researchers at UC Berkeley SkyLab and LMArena. With over 1,000,000 user votes, the platform ranks the best LLMs and AI chatbots using the Bradley-Terry model to generate live leaderboards. For technical details, check out our paper.
-
https://kind.engineering