VoiceOver can be automated on the iOS Simulator, but no single tool or API provides an end-to-end solution. This research investigated five dimensions of the problem: enabling/disabling VoiceOver, sending navigation commands, reading focus state, existing frameworks, and React Native's accessibility APIs. The core finding is that a viable automation stack exists by combining xcrun simctl (VoiceOver lifecycle), AppleScript keystroke injection (navigation), and the macOS AXUIElement API (tree inspection), but reading VoiceOver cursor position remains the hardest unsolved problem. No existing framework (Apple, Google, Deque, or open-source) automates VoiceOver itself. Every tool in the ecosystem validates accessibility metadata, not screen reader behavior. This gap is real and unaddressed.
Recommended architecture: A three-layer system combining (1) defaults write + launchctl for VoiceOver lifecycle control, (2) osascript keystroke injection for navigation commands, and (3) the macOS AXUIElement API for accessibility tree inspection.
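As a rough illustration of the first two layers, the commands below sketch how VoiceOver might be toggled on a booted simulator and how a VoiceOver navigation keystroke could be injected. The preference domain and keys (`com.apple.Accessibility`, `VoiceOverTouchEnabled`, `ApplicationAccessibilityEnabled`) are assumptions drawn from community practice rather than a documented API, and may vary by iOS version; the osascript step requires the terminal to have macOS Accessibility permission.

```shell
# Layer 1 (assumed preference keys): enable VoiceOver inside the booted
# simulator by writing to its defaults database. Some setups also restart
# the simulator's system services so the new settings take effect.
xcrun simctl spawn booted defaults write com.apple.Accessibility VoiceOverTouchEnabled -bool true
xcrun simctl spawn booted defaults write com.apple.Accessibility ApplicationAccessibilityEnabled -bool true

# Layer 2: inject a VoiceOver navigation gesture into the frontmost
# Simulator window via System Events. Key code 124 is Right Arrow, so
# Control+Option+Right Arrow is the standard VO "move to next element".
osascript \
  -e 'tell application "Simulator" to activate' \
  -e 'tell application "System Events" to key code 124 using {control down, option down}'
```

Layer 3 has no shell equivalent: reading the element under the VoiceOver cursor requires calling the AXUIElement C API from a macOS process, which is precisely where the unsolved focus-reading problem described above lives.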