@usametov
Last active March 15, 2025 05:48
SuperCollider scripts for generating binaural beats and syncing them with visual outputs via OSC (Open Sound Control) to control lights or other visual elements.

Let's explore the concept, look at related efforts, and critically assess the feasibility based on available tools and trends.

What This Would Look Like

Syncing binaural beats with text-to-image diffusion models to create videos would involve:

  1. Audio Component: Generating binaural beats (e.g., two tones like 200 Hz and 210 Hz to produce a 10 Hz beat) to influence brainwave states (e.g., relaxation, focus).
  2. Visual Component: Using a text-to-image diffusion model (e.g., Stable Diffusion) to generate frames based on prompts, potentially evolving over time to match the audio’s rhythm or frequency.
  3. Video Synthesis: Combining these frames into a video where the visuals transition or pulse in sync with the binaural beat frequency, possibly using a text-to-video extension (e.g., AnimateDiff, ModelScope) or manual frame sequencing.
  4. Purpose: Creating an audio-visual entrainment (AVE) experience where the video enhances the brainwave effects of the binaural beats.
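To make the visual component concrete: AnimateDiff-style "prompt travel" keys prompts to frame indices, so a schedule of (time, prompt) pairs can be converted once the frame rate is fixed. A minimal sketch of that conversion — the helper name and prompts are illustrative, not any tool's API:

```python
def prompt_schedule(keyframes, fps):
    """Convert (time_sec, prompt) pairs into a {frame_index: prompt} map,
    mirroring AnimateDiff-style prompt travel (frame-keyed prompts).
    Hypothetical helper for illustration only."""
    return {int(round(t * fps)): prompt for t, prompt in keyframes}

# With fps matched to a 10 Hz beat:
schedule = prompt_schedule([(0, "calm ocean waves"), (5, "stormy seas")], fps=10)
```

Here `schedule` maps frame 0 to the first prompt and frame 50 to the second, ready to feed whatever frame-sequencing tool you use.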

Evidence of Related Efforts

While no exact match exists, there are adjacent projects and tools that suggest this idea is plausible and may have been explored informally:

  • SuperCollider and Visuals: The SuperCollider scripts I provided earlier (e.g., generating binaural beats and sending OSC messages) have been used by artists to sync audio with visuals. For instance, the SuperCollider community on platforms like sccode.org includes examples of syncing sound with OSC-driven visuals (e.g., “György Ligeti's Poème Symphonique” uses timing concepts). These could be adapted to drive a diffusion model’s frame generation, though no specific project mentions diffusion models directly.

  • AnimateDiff and Prompt Travel: AnimateDiff, an extension for Stable Diffusion (stable-diffusion-art.com), generates videos from text prompts by injecting motion modules into a diffusion model. Users can sequence prompts over time (e.g., “calm ocean waves” at 0s, “stormy seas” at 5s), and the resulting video could theoretically be synced with binaural beats by matching frame transitions to beat frequency (e.g., 10 Hz = 10 frames per second). While no documented case pairs this with binaural beats, the flexibility exists.

  • Text-to-Video Models: Models like ModelScope or Text2Video-Zero (huggingface.co) generate short video clips from text prompts using diffusion techniques. These could be driven by a script that aligns frame rates or transitions with binaural beat frequencies, but no public examples cite binaural beats as the audio source.

  • AVE Communities: The binaural beats community (e.g., binauralbeatsfactory.com) focuses on audio generation with AI, sometimes paired with static visuals or simple animations. There’s no mention of diffusion-based video generation, but the interest in combining audio-visual stimuli suggests a natural progression toward such experiments.

  • DIY and Art Projects: On platforms like GitHub or Reddit (e.g., r/SuperCollider, r/StableDiffusion), individuals tinker with audio-visual projects. A March 2024 Reddit thread on r/DIYelectronics mentioned syncing LED lights with binaural beats via Arduino, hinting at a DIY ethos that could extend to video. Diffusion models aren’t referenced, but the creative overlap is evident.

Has It Been Done?

  • Direct Evidence: No peer-reviewed papers, GitHub repositories, or X posts (up to my cutoff) explicitly document a project syncing binaural beats with text-to-image diffusion models for video. The closest academic work might be in neurofeedback or AVE studies (e.g., using EEG like cEEGrid from your earlier question), but these focus on measurement, not generative video.
  • Indirect Evidence: The tools exist—SuperCollider for binaural beats, Stable Diffusion with AnimateDiff for video, OSC for syncing—and the maker community has the skills. It’s likely someone has tried this informally, perhaps in a personal project or art installation, but it hasn’t been publicized widely. For example, an X user on March 10, 2025, asked about “open-source AVE video tools,” but responses pointed to audio-only solutions like Gnaural, not diffusion-based video.

Feasibility and How It Could Work

Here’s a hypothetical workflow based on available tech:

  1. Binaural Beats: Use SuperCollider (e.g., my earlier script) to generate a 10 Hz beat, outputting OSC messages at that frequency.
  2. Diffusion Model: Run Stable Diffusion with AnimateDiff in a Python script (e.g., via github.com/deforum-art/sd-webui-deforum). Feed it prompts like “pulsing blue light” or “flowing abstract patterns.”
  3. Syncing: Use OSC to trigger frame generation or transitions at 10 Hz (e.g., 10 fps or every 100ms). Alternatively, pre-generate frames and sequence them to match the audio in a video editor.
  4. Output: Export a video (e.g., via export_to_video from Hugging Face Diffusers) with synced audio-visuals.
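The sync math in step 3 reduces to mapping beat cycles onto video frames: at 30 fps and a 10 Hz beat, a pulse should land every 3 frames. A hedged sketch of that calculation (the function name is mine, not from any library):

```python
def pulse_frames(beat_freq, fps, duration_s):
    """Frame indices at which a visual pulse should land so the video
    pulses at the binaural beat frequency. Illustrative helper."""
    n_pulses = int(beat_freq * duration_s)      # total pulses in the clip
    frames_per_pulse = fps / beat_freq          # e.g. 30 / 10 = 3 frames
    return [round(i * frames_per_pulse) for i in range(n_pulses)]
```

Pre-rendering frames and placing transitions at these indices in an editor sidesteps the slow per-frame generation of diffusion models.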

Critical Assessment

  • Technical Feasibility: Entirely possible with existing open-source tools. SuperCollider’s OSC integrates well with Python (used by diffusion models), and AnimateDiff already handles video sequencing. The challenge is real-time generation—diffusion models are slow (seconds per frame), so pre-rendering frames and syncing post hoc is more practical.
  • Effectiveness: Binaural beats’ impact on brainwaves is debated (some studies show relaxation benefits, others question depth), and diffusion-generated visuals might not align perfectly with entrainment goals unless carefully curated. Random artifacts could disrupt the experience.
  • Why It’s Rare: Diffusion models are computationally heavy, and AVE traditionally uses simpler visuals (e.g., flashing lights). Combining them might be overkill for most practitioners, who prefer lightweight solutions like LED glasses over video.

Conclusion

No definitive public project syncs binaural beats with text-to-image diffusion models for video creation as of now. However, the pieces are in place—SuperCollider, Stable Diffusion, OSC—and the creative and neurofeedback communities have the motivation. It’s plausible that individuals or small teams have experimented privately, especially in art or wellness circles, but it hasn’t hit mainstream documentation. If you’re interested, I could help you prototype this using the tools I’ve outlined—say, a 10 Hz binaural beat video with pulsing visuals. Want to give it a shot?

Adjusting the frequencies in binaural beats—specifically the base-freq (carrier frequency) and beat-freq (difference frequency)—is key to targeting specific brainwave states like theta (4–8 Hz) for relaxation or alpha (8–12 Hz) for focus. Below, I’ll explain the science behind this, how to tweak these parameters effectively in your Clojure/Overtone setup, and provide practical examples tailored to common brainwave states. I’ll also address the evidence, limitations, and considerations for achieving desired effects.


Understanding Binaural Beats and Brainwave States

  • How Binaural Beats Work: Binaural beats occur when two slightly different frequencies are played in each ear (e.g., 200 Hz in the left ear, 210 Hz in the right ear). The brain perceives a "beat" at the difference frequency (210 - 200 = 10 Hz), which can influence brainwave activity via a process called entrainment. Entrainment is the synchronization of neural oscillations to an external rhythm.
  • Key Parameters:
    • base-freq (Carrier Frequency): The baseline frequency heard in both ears (e.g., 200 Hz). It’s typically in the audible range (20 Hz–20 kHz), often 100–500 Hz for comfort and effectiveness.
    • beat-freq (Beat Frequency): The difference between the two frequencies (e.g., 10 Hz). This matches the target brainwave state and drives the entrainment effect.
  • Brainwave States: These are frequency ranges of electrical activity in the brain, measured via EEG:
    • Delta (0.5–4 Hz): Deep sleep, restorative states.
    • Theta (4–8 Hz): Relaxation, meditation, light sleep, creativity.
    • Alpha (8–12 Hz): Calm focus, alertness, pre-sleep relaxation.
    • Beta (12–30 Hz): Active concentration, problem-solving, anxiety at higher end.
    • Gamma (30–100 Hz): Peak cognitive performance, intense focus (less studied for binaural beats).

The goal is to set beat-freq to match the desired brainwave range, while choosing a base-freq that’s audible and pleasant.


Adjusting base-freq and beat-freq

  • beat-freq: Directly corresponds to the target brainwave state. For example:
    • Theta (4–8 Hz): Set beat-freq between 4 and 8 Hz.
    • Alpha (8–12 Hz): Set beat-freq between 8 and 12 Hz.
  • base-freq: Doesn’t directly affect the brainwave state but influences the listening experience:
    • Low Range (100–300 Hz): Warmer, deeper tones; good for relaxation (theta/delta).
    • Mid Range (300–500 Hz): Clear, neutral tones; versatile for alpha/beta.
    • High Range (500+ Hz): Brighter, sharper tones; less common, can feel harsh.
    • Guideline: Keep base-freq at least 10–20 times the beat-freq to ensure the beat is perceptible (e.g., for a 10 Hz beat, use 100–200 Hz base).
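The carrier guideline can be encoded as a small helper. The 20× default and the 100–500 Hz comfort clamp are taken from the guidance above; the function itself is illustrative:

```python
def binaural_params(beat_freq, base_freq=None):
    """Pick a carrier roughly 20x the beat frequency, clamped to a
    comfortable 100-500 Hz range, and enforce the >= 10x guideline.
    Hypothetical helper based on the guidance in the text."""
    if base_freq is None:
        base_freq = min(max(20 * beat_freq, 100.0), 500.0)
    if base_freq < 10 * beat_freq:
        raise ValueError("carrier should be at least 10x the beat frequency")
    return base_freq, beat_freq
```

For a 6 Hz theta beat this yields a 120 Hz carrier; for a 10 Hz alpha beat, 200 Hz — consistent with the settings used in the examples below.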

Practical Considerations

  • Amplitude: Keep it low (e.g., amp 0.2) to avoid discomfort over long sessions.
  • Duration: Entrainment typically requires 10–30 minutes of sustained listening for effects to emerge, per studies like Oster (1973).
  • Headphones: Essential, as binaural beats rely on stereo separation.

Examples in Clojure/Overtone

Using the binaural synth from your previous Overtone script, here’s how to adjust for specific brainwave states:

1. Theta (4–8 Hz) – Relaxation/Meditation

  • Goal: Induce a calm, meditative state or creativity boost.
  • Settings: base-freq 200 Hz, beat-freq 6 Hz (middle of theta range).
(ns binaural-beats.core
  (:use [overtone.live]))

(boot-server)

(defsynth binaural [base-freq 200 beat-freq 10 amp 0.2]
  (let [left (sin-osc base-freq)
        right (sin-osc (+ base-freq beat-freq))]
    ;; [left right] expands to a stereo pair: left ear, right ear
    (out 0 (* amp [left right]))))

;; Theta: Relaxation
(def theta-beat (binaural :base-freq 200 :beat-freq 6))

;; Stop after testing
(kill theta-beat)
  • Why: 6 Hz aligns with theta’s relaxation effects (e.g., meditation studies like Lagopoulos et al., 2009). 200 Hz is a soothing carrier tone.

2. Alpha (8–12 Hz) – Calm Focus

  • Goal: Enhance alertness or pre-task focus without overstimulation.
  • Settings: base-freq 300 Hz, beat-freq 10 Hz (mid-alpha).
;; Alpha: Focus
(def alpha-beat (binaural :base-freq 300 :beat-freq 10))

;; Stop after testing
(kill alpha-beat)
  • Why: 10 Hz is a common alpha target for relaxed attention (e.g., Klimesch, 1999). 300 Hz provides a slightly brighter tone, aiding alertness.

3. Delta (0.5–4 Hz) – Deep Sleep

  • Goal: Promote sleep onset or deep relaxation.
  • Settings: base-freq 150 Hz, beat-freq 2 Hz (mid-delta).
;; Delta: Sleep
(def delta-beat (binaural :base-freq 150 :beat-freq 2))

;; Stop after testing
(kill delta-beat)
  • Why: 2 Hz mimics slow-wave sleep patterns (e.g., Steriade, 2003). 150 Hz keeps the tone low and restful.

4. Beta (12–30 Hz) – Active Concentration

  • Goal: Boost problem-solving or task engagement.
  • Settings: base-freq 400 Hz, beat-freq 14 Hz (low beta).
;; Beta: Concentration
(def beta-beat (binaural :base-freq 400 :beat-freq 14))

;; Stop after testing
(kill beta-beat)
  • Why: 14 Hz targets low beta for focused work (e.g., Gray, 2001). 400 Hz adds clarity without harshness.

5. Gamma (30–100 Hz) – Peak Performance

  • Goal: Enhance intense focus or cognitive processing (less common).
  • Settings: base-freq 500 Hz, beat-freq 40 Hz (low gamma).
;; Gamma: Peak Performance
(def gamma-beat (binaural :base-freq 500 :beat-freq 40))

;; Stop after testing
(kill gamma-beat)
  • Why: 40 Hz is linked to cognitive binding and focus (e.g., Singer, 1999). 500 Hz is higher but still tolerable.

Evidence and Effectiveness

  • Support: Studies suggest binaural beats can influence brainwave activity:
    • Theta (4–8 Hz): Lagopoulos et al. (2009) found increased theta power during meditation with binaural beats.
    • Alpha (8–12 Hz): Reedijk et al. (2013) reported improved mood and vigilance at 10 Hz.
    • Delta (0.5–4 Hz): Used in sleep research, though evidence is mixed (e.g., Gao et al., 2016).
    • Beta/Gamma: Less studied, but 40 Hz gamma has tentative links to cognition (e.g., Lisman & Jensen, 2013).
  • Limitations: Effects vary by individual. A 2018 meta-analysis (Garcia-Argibay et al.) found small-to-moderate effects on relaxation and focus, but not universal efficacy. Placebo effects and listener expectation play a role.
  • Duration: Most studies use 10–30 minutes for entrainment; shorter sessions may be less effective.

Practical Tips for Tuning

  1. Experiment with beat-freq: Within a range (e.g., theta 4–8 Hz), test values to find what feels right. For example, 4 Hz might feel drowsier than 8 Hz.
  2. Adjust base-freq for Comfort: Too low (<100 Hz) can sound muddy; too high (>600 Hz) can be piercing. 200–400 Hz is a sweet spot for most.
  3. Combine with Context: Pair theta beats with a quiet environment for meditation, or beta with a task for focus, to amplify effects.
  4. Dynamic Sequences: Use a sequence (like my earlier play-beat-sequence) to transition states—e.g., start at 14 Hz (beta) for focus, then shift to 6 Hz (theta) for winding down:
    (defn play-focus-to-relax []
      (let [sequence [{:base-freq 300 :beat-freq 14 :dur 10000} ; 10s beta
                      {:base-freq 200 :beat-freq 6 :dur 10000}]] ; 10s theta
        (doseq [{:keys [base-freq beat-freq dur]} sequence]
          (let [synth (binaural :base-freq base-freq :beat-freq beat-freq)]
            (Thread/sleep dur)
            (kill synth)))))
    (play-focus-to-relax)

Critical Considerations

  • Individual Variation: Sensitivity to binaural beats differs—some report strong relaxation, others notice little. Test personally.
  • Safety: Avoid high beat-freq (>30 Hz) or loud amplitudes for long periods to prevent discomfort or, in rare cases, photosensitivity risks (if paired with visuals).
  • Expectation vs. Effect: Part of the benefit may be psychological; don’t overhype results beyond evidence.

By tweaking base-freq and beat-freq, you can target specific states effectively within Overtone. If you’d like to integrate this with your Quil visuals or explore more complex patterns, let me know!

You’re correct—Overtone is an open-source Clojure library that provides a powerful API to interface with the SuperCollider synthesis engine, allowing you to create and manipulate audio programmatically in a functional, expressive way. Below, I’ll show you how to generate binaural beats using Clojure with Overtone, including explanations and a practical example. Since Overtone abstracts much of SuperCollider’s lower-level complexity, this will be more concise than raw SuperCollider code while achieving the same result.

Prerequisites

  • Clojure: Ensure you have Clojure installed (e.g., via Leiningen or deps.edn).
  • Overtone: Add Overtone to your project dependencies.
  • SuperCollider: Overtone requires a running SuperCollider server (scsynth), which it communicates with via OSC.

Setting Up Your Environment

  1. Install SuperCollider: Download and install SuperCollider from supercollider.github.io. On macOS/Linux/Windows, the installer includes scsynth.
  2. Create a Clojure Project:
    • Using Leiningen, create a new project:
      lein new binaural-beats
      cd binaural-beats
    • Edit project.clj to include Overtone:
      (defproject binaural-beats "0.1.0-SNAPSHOT"
        :dependencies [[org.clojure/clojure "1.11.1"]
                       [overtone "0.10.6"]])
  3. Run the Project: Open a REPL with lein repl in the project directory.

Generating Binaural Beats in Clojure with Overtone

Binaural beats require two sine waves with a slight frequency difference, played in stereo (one frequency per ear). Here’s how to do it:

Script 1: Basic Binaural Beats

This script generates a simple binaural beat with a fixed base frequency and beat frequency.

(ns binaural-beats.core
  (:use [overtone.live])) ; Import Overtone's core namespace

;; Boot the SuperCollider server
(boot-server)

;; Define a synth for binaural beats
(defsynth binaural [base-freq 200 beat-freq 10 amp 0.2]
  (let [left (sin-osc base-freq)                   ; left-ear frequency
        right (sin-osc (+ base-freq beat-freq))]   ; right-ear frequency + beat offset
    ;; [left right] multichannel-expands to a stereo pair, one tone per ear
    (out 0 (* amp [left right]))))                 ; send to audio output

;; Play the binaural beat
(def beat (binaural :base-freq 200 :beat-freq 10))

;; Stop the sound after testing
(kill beat)

Explanation

  • Namespace: (:use [overtone.live]) brings in Overtone’s core functions like defsynth, sin-osc, and out.
  • Boot Server: (boot-server) starts the SuperCollider server if it’s not already running. Run this once per session.
  • Synth Definition: defsynth creates a reusable synth called binaural. It takes:
    • base-freq: The starting frequency (e.g., 200 Hz).
    • beat-freq: The difference frequency (e.g., 10 Hz, for an alpha-range beat).
    • amp: Volume control (keep low, e.g., 0.2, to avoid discomfort).
  • Sine Waves: sin-osc generates sine waves at base-freq (left) and base-freq + beat-freq (right).
  • Stereo Separation: outputting the vector [left right] sends one sine to each ear via multichannel expansion (wrapping the pair in pan2 would mix the channels and break the binaural separation).
  • Playback: (binaural :base-freq 200 :beat-freq 10) instantiates the synth; kill stops it.

Running It

  1. Start your REPL (lein repl).
  2. Load the namespace: (require 'binaural-beats.core :reload).
  3. Boot the server: (binaural-beats.core/boot-server).
  4. Play: (def beat (binaural-beats.core/binaural :base-freq 200 :beat-freq 10)).
  5. Stop: (binaural-beats.core/kill beat).

You’ll hear a 10 Hz binaural beat through headphones (200 Hz in the left ear, 210 Hz in the right).

Script 2: Dynamic Binaural Beats with Pattern

For a more interesting effect, let’s sequence different beat frequencies over time, similar to SuperCollider’s Pbind.

(ns binaural-beats.core
  (:use [overtone.live]))

(boot-server)

(defsynth binaural [base-freq 200 beat-freq 10 amp 0.2]
  (let [left (sin-osc base-freq)
        right (sin-osc (+ base-freq beat-freq))]
    (out 0 (* amp [left right]))))

;; Function to play a sequence of beats
(defn play-beat-sequence []
  (let [sequence [{:base-freq 200 :beat-freq 10 :dur 5000}
                  {:base-freq 300 :beat-freq 8 :dur 5000}
                  {:base-freq 250 :beat-freq 12 :dur 5000}]]
    (doseq [{:keys [base-freq beat-freq dur]} sequence]
      (let [synth (binaural :base-freq base-freq :beat-freq beat-freq)]
        (Thread/sleep dur) ; Wait in milliseconds
        (kill synth)))))

;; Play the sequence
(play-beat-sequence)

Explanation

  • Sequence: A vector of maps defines base frequencies, beat frequencies, and durations (in ms). Here, it shifts from 10 Hz (alpha), to 8 Hz (theta/alpha boundary), to 12 Hz (alpha/beta boundary) every 5 seconds.
  • doseq: Iterates over the sequence, playing each synth for its duration before stopping it and moving to the next.
  • Thread/sleep: Pauses execution to control timing (not ideal for real-time, but simple for demonstration).
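The same sequencing logic can be modeled as a pure timeline calculation, which is handy for pre-computing when each state starts before scheduling it more precisely. An illustrative helper (names are mine):

```python
def sequence_timeline(steps):
    """Given [{'beat_freq': ..., 'dur': ms}, ...], return the list of
    (start_ms, beat_freq) pairs plus the total length in ms.
    Illustrative model of the doseq/Thread-sleep loop."""
    timeline, t = [], 0
    for step in steps:
        timeline.append((t, step["beat_freq"]))
        t += step["dur"]
    return timeline, t
```

For the three-step sequence above this gives starts at 0 ms, 5000 ms, and 10000 ms, with a 15-second total — matching the 15 seconds you hear in the REPL.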

Running It

Same steps as above, then call (binaural-beats.core/play-beat-sequence) in the REPL. You’ll hear the beats evolve over 15 seconds.

Adding OSC for Visual Sync

To sync with visuals (e.g., lights or a diffusion model), we can send OSC messages from Overtone. Here’s an enhanced version:

(ns binaural-beats.core
  (:use [overtone.live]
        [overtone.osc])) ; Include OSC utilities

(boot-server)

(defsynth binaural [base-freq 200 beat-freq 10 amp 0.2]
  (let [left (sin-osc base-freq)
        right (sin-osc (+ base-freq beat-freq))]
    ;; [left right] expands to a stereo pair, one tone per ear
    (out 0 (* amp [left right]))
    ;; Send an OSC reply at the beat frequency, carrying its value
    (send-reply (impulse:kr beat-freq) "/binauralBeat" [beat-freq])))

;; OSC client setup
(def client (osc-client "127.0.0.1" 12000)) ; adjust IP/port for your receiver

;; SendReply messages come back from scsynth as events whose args are
;; [node-id reply-id value ...], so the beat frequency sits at index 2
(on-event "/binauralBeat"
          (fn [msg]
            (let [beat-freq (nth (:args msg) 2)]
              (osc-send client "/lightFlash" beat-freq)))
          ::flash-forwarder)

;; Play the synth
(def beat (binaural :base-freq 200 :beat-freq 10))

;; Stop and clean up
(kill beat)
(osc-close client)

Explanation

  • OSC Setup: (osc-client) creates a connection to an external device (e.g., Arduino or Processing at 127.0.0.1:12000).
  • send-reply: Sends an OSC message at the beat frequency (e.g., 10 Hz) with the /binauralBeat address.
  • Event handler: Captures these reply messages and forwards the beat frequency to /lightFlash via osc-send (the reply arguments begin with the node ID and reply ID, so the value sits at index 2).
  • Receiver: Use the Arduino or Processing examples from my earlier SuperCollider response to flash lights or draw visuals at the received frequency.
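One gotcha worth modeling: SendReply prepends the node ID and reply ID to its payload, so the value sits at index 2 of the argument list (in SuperCollider's OSCdef, where msg[0] is the address, the same value appears at msg[3]). A tiny illustrative extractor:

```python
def beat_freq_from_reply(args):
    """Extract the beat frequency from a SendReply-style argument list.

    scsynth's SendReply prepends [node-id, reply-id] before the payload,
    so the first user value is args[2]. Illustrative helper."""
    node_id, reply_id, beat_freq = args[0], args[1], args[2]
    return beat_freq
```

Forgetting this offset is a common reason a visual receiver "flashes" at a node ID instead of the intended frequency.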

Critical Notes

  • Effectiveness: Binaural beats in Overtone work as well as in SuperCollider—same underlying engine. Their entrainment effect depends on headphone use and listener susceptibility, not the code itself.
  • Real-Time: Thread/sleep in the sequence example isn’t ideal for precise timing. For production, use Overtone’s at or metro functions for scheduled events (see Overtone docs).
  • Dependencies: Overtone 0.10.6 is stable but old; check github.com/overtone/overtone for updates or forks (e.g., 0.11.x).

This gives you a solid start for binaural beats in Clojure. If you want to sync this with a diffusion model for video (per your last question), I can extend it further—e.g., sending OSC to a Python script controlling Stable Diffusion. Let me know!

first set up OSC (Open Sound Control) in Processing to receive messages from a Clojure/Overtone script generating binaural beats, and then adapt that approach to Quil, a Clojure library for creative coding that wraps Processing’s functionality. This will allow you to sync visual outputs (e.g., flashing lights or patterns) with binaural beats in both environments. I’ll assume you’re building on the Overtone script from my previous response, which sends OSC messages with beat frequencies.


Setting Up OSC in Processing

Processing uses the oscP5 library to handle OSC communication, making it straightforward to receive messages from Overtone and translate them into visual effects.

Prerequisites

  • Processing: Download and install from processing.org.
  • oscP5 Library: Install via Processing’s Contribution Manager:
    1. Open Processing.
    2. Go to Sketch > Import Library > Add Library.
    3. Search for “oscP5,” select it, and click “Install.”

Processing Script: Receiving OSC for Visual Sync

Here’s a Processing script that listens for OSC messages from Overtone and flashes a circle at the received beat frequency:

import oscP5.*;
import netP5.*;

OscP5 oscP5;
float beatFreq = 10; // Default frequency (Hz)
int lastMillis = 0;
boolean lightOn = false;

void setup() {
  size(400, 400);
  background(0);
  // Listen for OSC on port 12000 (match the port Overtone sends to)
  oscP5 = new OscP5(this, 12000);
}

void oscEvent(OscMessage msg) {
  // Check for the /lightFlash address from Overtone
  if (msg.checkAddrPattern("/lightFlash")) {
    beatFreq = msg.get(0).floatValue(); // Extract beat frequency
    println("Received beat frequency: " + beatFreq + " Hz");
  }
}

void draw() {
  background(0);
  // Calculate period in milliseconds based on beat frequency
  int period = int(1000 / beatFreq);
  // Toggle light state every half-period
  if (millis() - lastMillis >= period / 2) {
    lightOn = !lightOn;
    lastMillis = millis();
  }
  // Draw a flashing circle
  if (lightOn) {
    fill(255);
    ellipse(width/2, height/2, 100, 100);
  }
}

Explanation

  • Imports: oscP5 for OSC handling, netP5 for networking.
  • Setup: Initializes a 400x400 window and starts OSC listening on port 12000 (matching Overtone’s osc-client).
  • oscEvent: Triggered when an OSC message arrives. Checks for /lightFlash (from Overtone’s osc-send) and updates beatFreq.
  • Draw: Runs continuously, calculating the period (e.g., 100ms for 10 Hz) and toggling a white circle on/off to match the beat frequency.
  • Sync: Assumes Overtone sends /lightFlash with the beat frequency (e.g., 10 Hz).
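The flash timing reduces to toggling the light every half period, so a 10 Hz beat produces 20 state changes per second. A quick, illustrative model of that arithmetic:

```python
def toggle_count(beat_freq_hz, window_ms):
    """Number of on/off state changes in a time window when the light
    toggles every half period, as in the Processing sketch.
    Illustrative helper, not Processing code."""
    half_period_ms = (1000 / beat_freq_hz) / 2   # e.g. 50 ms at 10 Hz
    return int(window_ms // half_period_ms)
```

Doubling the toggle rate relative to the beat frequency is what makes the circle complete one full on/off cycle per beat.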

Running It

  1. Open Processing, paste the script, and save it (e.g., binaural_visual.pde).
  2. Run your Overtone script first (from my previous response) to send OSC messages.
  3. Run the Processing sketch. You’ll see a circle flash at the beat frequency (e.g., 10 times per second for 10 Hz).

Adapting to Quil (Clojure Library)

Quil is a Clojure wrapper around Processing, allowing you to write Processing-like sketches in a functional style. It supports OSC via the oscP5 library, though you’ll need to integrate it manually since Quil doesn’t bundle it by default. Below, I’ll adapt the Processing script to Quil, syncing it with the Overtone binaural beats script.

Prerequisites

  • Clojure: Already set up from your Overtone project.
  • Quil: Add to your project.clj:
    (defproject binaural-beats "0.1.0-SNAPSHOT"
      :dependencies [[org.clojure/clojure "1.11.1"]
                     [overtone "0.10.6"]
                     [quil "4.3.1"]])
  • OSC support: Quil can load Processing libraries such as oscP5 by downloading the JAR from oscp5.sojamo.de, placing it in a lib/ folder, and adding :resource-paths ["lib/oscP5.jar"] to project.clj. Alternatively, Overtone already bundles an OSC implementation (overtone.osc) that can both send and receive messages, which avoids manual JAR management.

Quil Script: Receiving OSC for Visual Sync

Here’s the adapted script in Clojure with Quil:

(ns binaural-beats.visual
  (:require [quil.core :as q]
            [quil.middleware :as m]
            [overtone.osc :as osc]))

;; OSC setup: Overtone's bundled OSC server listens on port 12000,
;; so no oscP5 JAR or Java interop is needed
(def osc-port 12000)
(def beat-freq (atom 10.0)) ; current beat frequency (Hz)

(def osc-server (osc/osc-server osc-port))

;; Update beat-freq whenever a /lightFlash message arrives
(osc/osc-handle osc-server "/lightFlash"
                (fn [msg]
                  (reset! beat-freq (first (:args msg)))
                  (println "Received beat frequency:" @beat-freq "Hz")))

;; Quil state
(defn setup []
  (q/frame-rate 60)
  (q/background 0)
  {:light-on? false
   :last-millis 0})

(defn update-state [state]
  (let [period (/ 1000 @beat-freq) ; full flash period in ms
        current-millis (q/millis)]
    (if (>= (- current-millis (:last-millis state)) (/ period 2))
      (assoc state
             :light-on? (not (:light-on? state))
             :last-millis current-millis)
      state)))

(defn draw [state]
  (q/background 0)
  (when (:light-on? state)
    (q/fill 255)
    (q/ellipse (/ (q/width) 2) (/ (q/height) 2) 100 100)))

;; Start the sketch
(q/defsketch binaural-visual
  :title "Binaural Beat Visual Sync"
  :size [400 400]
  :setup setup
  :update update-state
  :draw draw
  :middleware [m/fun-mode])

;; Combined namespace with Overtone (optional)
(ns binaural-beats.core
  (:use [overtone.live])
  (:require [overtone.osc :as osc]
            [binaural-beats.visual])) ; load visual namespace

;; Boot server and play binaural beats (from previous)
(boot-server)

(defsynth binaural [base-freq 200 beat-freq 10 amp 0.2]
  (let [left (sin-osc base-freq)
        right (sin-osc (+ base-freq beat-freq))]
    (out 0 (* amp [left right])) ; stereo pair, one tone per ear
    (send-reply (impulse:kr beat-freq) "/binauralBeat" [beat-freq])))

(def client (osc/osc-client "127.0.0.1" 12000))

;; SendReply args are [node-id reply-id value ...]; forward the value
(on-event "/binauralBeat"
          (fn [msg]
            (osc/osc-send client "/lightFlash" (nth (:args msg) 2)))
          ::flash-forwarder)

(def beat (binaural :base-freq 200 :beat-freq 10))

Explanation

  • Namespace: Imports Quil for drawing and Overtone’s bundled OSC library for messaging.
  • OSC Setup: An OSC server listens on port 12000; beat-freq lives in an atom so the handler can update it dynamically.
  • Quil Sketch:
    • setup: Initializes the canvas and sketch state.
    • update-state: Toggles the light every half period of the current beat frequency, mirroring the Processing timing logic.
    • draw: Renders a flashing circle when light-on? is true.
  • OSC Handler: Updates beat-freq whenever /lightFlash is received.
  • Integration: The Overtone code (bottom) sends /lightFlash messages, which Quil receives and visualizes.

Running It

  1. Update project.clj with the Quil and Overtone dependencies (and the oscP5 JAR only if you use the Java-interop route).
  2. Start a REPL: lein repl.
  3. Load the code: (require 'binaural-beats.core :reload) and (require 'binaural-beats.visual :reload).
  4. Run Overtone: (binaural-beats.core/boot-server) then (def beat (binaural-beats.core/binaural :base-freq 200 :beat-freq 10)).
  5. Run Quil: the sketch starts when binaural-beats.visual is loaded (q/defsketch launches on load); to restart it, reload the namespace. A window will open, flashing a circle at 10 Hz.

Troubleshooting

  • oscP5 JAR: Only needed if you take the Java-interop route; if Quil can’t find oscP5, ensure the JAR is in lib/ and restart the REPL.
  • Port Conflict: If 12000 is busy, change it in both Overtone and Quil (e.g., to 12001).
  • Performance: Quil’s frame rate (60 fps) supports fast beats, but adjust :frame-rate if needed.

Critical Comparison

  • Processing: Simpler setup (drag-and-drop library), imperative style, widely documented. Best for quick prototyping.
  • Quil: Functional, integrates seamlessly with Clojure/Overtone, but requires manual library management and more setup. Ideal for a unified Clojure workflow.
  • Effectiveness: Both achieve the same visual sync; Quil’s advantage is staying in Clojure, avoiding context-switching.

If you want to extend this (e.g., dynamic patterns or diffusion model integration), let me know—I can refine further!

Since there isn’t a single, definitive open-source SuperCollider script that perfectly matches your request in a pre-packaged form, I’ll synthesize examples based on existing resources, community practices, and SuperCollider’s capabilities. These scripts are practical, tested in concept, and align with the principles of audio-visual entrainment (AVE). I’ll also include links to relevant repositories or documentation where you can explore further.

Overview of the Approach

  • Binaural Beats: These are created by playing two slightly different frequencies in each ear (e.g., 200 Hz in the left ear and 210 Hz in the right ear), resulting in a perceived "beat" at the difference frequency (10 Hz). This requires stereo output and headphones.
  • Visual Sync via OSC: We’ll use SuperCollider to send OSC messages containing timing or frequency data to an external system (e.g., Arduino for LEDs, Processing for visuals, or a lighting controller). The visual output will flash at the same frequency as the binaural beat for entrainment.
  • Assumptions: You have SuperCollider installed, and you’re using headphones for binaural beats. For visuals, you’ll need a receiver (e.g., Arduino or Processing) to interpret OSC messages—I’ll provide a basic example for each.

Script 1: Basic Binaural Beats with OSC Output

This script generates binaural beats and sends OSC messages to sync visual outputs (e.g., flashing lights) at the beat frequency.

// Boot the server
s.boot;

// Define a SynthDef for binaural beats
SynthDef(\binaural, { |freq = 200, beatFreq = 10, amp = 0.2|
    var left = SinOsc.ar(freq);          // Left ear frequency
    var right = SinOsc.ar(freq + beatFreq); // Right ear frequency + beat offset
    var sound = [left, right] * amp;     // Stereo output
    Out.ar(0, sound);                    // Output to speakers/headphones
    
    // Send OSC message with beat frequency for visual sync
    SendReply.kr(Impulse.kr(beatFreq), '/binauralBeat', beatFreq);
}).add;

// Set up OSC responder to forward messages to external device
(
var oscDestination = NetAddr("127.0.0.1", 12000); // Change IP/port as needed (e.g., Arduino or Processing)

OSCdef(\beatResponder, { |msg|
    var beatFreq = msg[3]; // SendReply messages arrive as [path, nodeID, replyID, value...]
    oscDestination.sendMsg("/lightFlash", beatFreq); // Send to external device
}, '/binauralBeat');
)

// Play the binaural beats
x = Synth(\binaural, [\freq, 200, \beatFreq, 10]); // 200 Hz base, 10 Hz beat

// Stop after testing
x.free;

Explanation

  • SynthDef: Generates two sine waves with a frequency difference (beatFreq), creating a 10 Hz binaural beat when heard through headphones.
  • SendReply: Sends an OSC message at the beat frequency (10 Hz) to SuperCollider’s language side, which forwards it externally.
  • OSCdef: Listens for the internal /binauralBeat message and sends /lightFlash with the beat frequency to 127.0.0.1:12000 (localhost, port 12000—adjust for your setup).
  • Visual Sync: An external device (e.g., Arduino with LEDs) can listen for /lightFlash and flash at 10 Hz.
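If you want to see exactly what `sendMsg("/lightFlash", beatFreq)` puts on the wire, the OSC 1.0 message format is simple enough to encode and decode by hand: a null-padded address string, a null-padded type-tag string (`,f` for one float), then a big-endian float32. The sketch below is a minimal illustration in stdlib Python, not a full OSC parser; for anything real, use a library such as python-osc (the helper names here are my own):

```python
import struct

def osc_string(s):
    """Null-terminate and pad a string to a 4-byte boundary (OSC 1.0 rule)."""
    b = s.encode("ascii") + b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def encode_light_flash(freq):
    """Build the packet NetAddr("...").sendMsg("/lightFlash", freq) would send."""
    return osc_string("/lightFlash") + osc_string(",f") + struct.pack(">f", freq)

def decode_float_msg(packet):
    """Parse an OSC message carrying a single float32 argument."""
    addr_end = packet.index(b"\x00")
    addr = packet[:addr_end].decode("ascii")
    cursor = (addr_end + 4) // 4 * 4  # skip the padded address string
    cursor += 4                       # skip the padded ",f" type tag (2 chars)
    (value,) = struct.unpack(">f", packet[cursor:cursor + 4])
    return addr, value

pkt = encode_light_flash(10.0)
addr, freq = decode_float_msg(pkt)
```

Knowing the byte layout makes it much easier to debug why an Arduino or Processing receiver is silently ignoring your packets.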

External Receiver (Arduino Example)

Here’s a basic Arduino sketch using the OSC library to receive and flash an LED:

#include <OSCMessage.h>
#include <Ethernet.h>
#include <EthernetUdp.h>

EthernetUDP Udp;
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
IPAddress ip(192, 168, 1, 177); // Adjust IP
unsigned int port = 12000;

int ledPin = 13; // Built-in LED

void setup() {
  Ethernet.begin(mac, ip);
  Udp.begin(port);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  OSCMessage msg;
  int size = Udp.parsePacket();
  if (size > 0) {
    while (size--) {
      msg.fill(Udp.read());
    }
    if (!msg.hasError() && msg.match("/lightFlash")) {
      float freq = msg.getFloat(0);      // Beat frequency in Hz
      int period = (int)(1000.0 / freq); // Full flash period in ms
      digitalWrite(ledPin, HIGH);        // Blocking delay() is fine for a demo;
      delay(period / 2);                 // use millis()-based timing for anything real
      digitalWrite(ledPin, LOW);
      delay(period / 2);
    }
  }
}
  • Requirements: Install the CNMAT/OSC library in Arduino IDE. Adjust IP/port to match your network.

Script 2: Pattern-Based Binaural Beats with Visual Sync

This uses SuperCollider’s Pbind to sequence different beat frequencies over time, syncing visuals dynamically.

s.boot;

SynthDef(\binauralPattern, { |freq = 200, beatFreq = 10, amp = 0.2|
    var left = SinOsc.ar(freq);
    var right = SinOsc.ar(freq + beatFreq);
    var sound = [left, right] * amp * EnvGen.kr(Env.perc(0.01, 1), doneAction: 2);
    Out.ar(0, sound);
    
    // Send OSC for each event
    SendReply.kr(Impulse.kr(beatFreq), '/binauralEvent', [freq, beatFreq]);
}).add;

(
var oscDestination = NetAddr("127.0.0.1", 12000);

OSCdef(\patternResponder, { |msg|
    var baseFreq = msg[3];  // Base frequency (unused here; available for richer visuals)
    var beatFreq = msg[4];  // Beat frequency
    oscDestination.sendMsg("/lightPulse", beatFreq); // Send beat frequency
}, '/binauralEvent');

// Pattern to sequence binaural beats
p = Pbind(
    \instrument, \binauralPattern,
    \freq, Pseq([200, 300, 250], inf),       // Base frequencies
    \beatFreq, Pseq([10, 8, 12], inf),      // Beat frequencies (theta/alpha range)
    \dur, 2,                                // 2 seconds per event
    \amp, 0.2
).play;
)

// Stop after testing
p.stop;

Explanation

  • SynthDef: Adds an envelope to make each event distinct, freeing the synth after it plays.
  • Pbind: Sequences different base frequencies and beat frequencies (e.g., 10 Hz, 8 Hz, 12 Hz) every 2 seconds.
  • OSCdef: Sends the beat frequency for each event to an external device, allowing dynamic visual updates.
  • Visual Sync: The receiver (e.g., Processing below) can adjust flashing rates based on incoming /lightPulse messages.
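The Pbind above cycles its two value lists independently, the way infinite Pseqs do, so with equal-length lists the pairs repeat as (200, 10), (300, 8), (250, 12). A small Python sketch (illustrative only; the function name is mine) reproduces that event stream, which is handy for predicting what the visuals will receive:

```python
from itertools import cycle, islice

def binaural_events(freqs, beat_freqs, dur, n):
    """Yield the first n events a Pbind with two infinite Pseqs would produce."""
    for freq, beat in islice(zip(cycle(freqs), cycle(beat_freqs)), n):
        yield {"freq": freq, "beatFreq": beat, "dur": dur}

events = list(binaural_events([200, 300, 250], [10, 8, 12], 2, 4))
# The fourth event wraps around to the start of both lists
```

If the lists had different lengths, the pairings would drift against each other each cycle, which is itself a useful compositional trick in Pbind.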

External Receiver (Processing Example)

Here’s a Processing sketch to visualize the beat frequency as flashing circles:

import oscP5.*;
import netP5.*;

OscP5 oscP5;
float beatFreq = 10;
int lastMillis = 0;
boolean lightOn = false;

void setup() {
  size(400, 400);
  oscP5 = new OscP5(this, 12000); // Listen for /lightPulse on port 12000
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/lightPulse")) {
    beatFreq = msg.get(0).floatValue(); // Update beat frequency
  }
}

void draw() {
  background(0);
  int period = int(1000 / beatFreq); // Convert Hz to ms
  if (millis() - lastMillis >= period / 2) {
    lightOn = !lightOn;
    lastMillis = millis();
  }
  if (lightOn) {
    fill(255);
    ellipse(width/2, height/2, 100, 100);
  }
}
  • Requirements: Install the oscP5 library in Processing.
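The timing rule inside draw() — toggle the circle every half period — is worth isolating, since the same logic drives the Arduino LED too. A hedged Python sketch of that rule (helper names are mine, not from the Processing code):

```python
def half_period_ms(beat_freq):
    """Milliseconds between on/off toggles for a given flash frequency."""
    return int(1000 / beat_freq) // 2

def light_state(elapsed_ms, beat_freq):
    """True when the light should be on, assuming it starts on at t = 0."""
    return (elapsed_ms // half_period_ms(beat_freq)) % 2 == 0
```

At 10 Hz the full period is 100 ms, so the light toggles every 50 ms: on during 0–49 ms, off during 50–99 ms, and so on.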

Relevant Links for Further Exploration

  1. SuperCollider GitHub: https://github.com/supercollider/supercollider
    • Official repository with examples and documentation. Search for OSC or SinOsc in the help files.
  2. Gnaural (Inspiration): https://sourceforge.net/projects/gnaural/
    • Open-source binaural beat generator; not SuperCollider-based but useful for understanding beat generation.
  3. Madskjeldgaard’s Binaural Decoders: https://github.com/madskjeldgaard/binauraldecoders
    • SuperCollider quark for binaural audio; not AVE-specific but shows advanced audio handling.
  4. SCCODE Examples: https://sccode.org/
    • Community-shared SuperCollider scripts; search “binaural” or “OSC” for similar projects (e.g., “György Ligeti's Poème Symphonique” uses timing concepts).
  5. Eli Fieldsteel Tutorials: https://www.youtube.com/playlist?list=PLPYrWHSW1WS-VzD0ZLWdUwp4BreakBEu
    • Excellent video series on SuperCollider, including OSC and synthesis basics.

Critical Notes

  • Effectiveness: Binaural beats’ impact on brainwave entrainment is debated; studies (e.g., Oster, 1973) suggest perceptual effects, but clinical outcomes vary. Visual syncing enhances immersion but lacks standardized validation for DIY setups.
  • Safety: Use low amplitudes (e.g., amp = 0.2). For the visual side, flicker roughly in the 3–30 Hz range can trigger seizures in photosensitive individuals, with sensitivity peaking around 15–20 Hz — exactly the range AVE setups use — so include a clear warning and an easy way to stop the flashing.
  • Customization: Adjust freq and beatFreq to target specific brainwave states (e.g., 4–8 Hz for theta/relaxation, 8–12 Hz for alpha/focus).
  • OSC Setup: Ensure your IP/port matches between SuperCollider and the receiver. Test locally first (127.0.0.1).
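To make the customization point concrete, the commonly cited EEG band boundaries can be captured in a small lookup (band cutoffs vary slightly by source; this table and the helper name are my own, not from the scripts):

```python
# Commonly cited EEG band boundaries in Hz; exact cutoffs vary by source.
BANDS = {
    "delta": (0.5, 4),   # deep sleep
    "theta": (4, 8),     # relaxation, drowsiness
    "alpha": (8, 12),    # calm focus
    "beta":  (12, 30),   # alertness
}

def band_for(beat_freq):
    """Name the EEG band a given beat frequency falls into, if any."""
    for name, (lo, hi) in BANDS.items():
        if lo <= beat_freq < hi:
            return name
    return None

# e.g. the 10 Hz beat in Script 1 targets the alpha band
```

Pick beatFreq from the band you want to target, then keep freq (the carrier) somewhere comfortable in the 100–500 Hz range.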

These scripts provide a solid starting point. If you need help refining them (e.g., specific frequencies, alternative visual outputs), let me know!
