Created October 30, 2020 21:12
Use a previously trained TensorFlow micmon sound model to make predictions on sound segments from a microphone.
```python
import os

from micmon.audio import AudioDevice
from micmon.model import Model

model_dir = os.path.expanduser('~/models/sound-detect')
model = Model.load(model_dir)

audio_system = 'alsa'        # Supported: alsa and pulse
audio_device = 'plughw:2,0'  # Get a list of recognized input devices with arecord -l

with AudioDevice(audio_system, device=audio_device) as source:
    for sample in source:
        source.pause()       # Pause recording while we process the frame
        prediction = model.predict(sample)
        print(prediction)
        source.resume()      # Resume recording
```
@dovanhuong If you look at the `micmon.audio` package you'll find an `AudioFile` object that exposes a similar interface to `AudioDevice`. So just replacing `AudioDevice` with `AudioFile` (and specifying a source file instead of a source device) should suffice.
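A minimal sketch of that swap might look like the following. Note the `AudioFile` constructor arguments and the file path are assumptions for illustration; check the micmon source for the exact signature:

```python
import os

from micmon.audio import AudioFile
from micmon.model import Model

model_dir = os.path.expanduser('~/models/sound-detect')
model = Model.load(model_dir)

# Hypothetical: AudioFile is assumed here to accept a path to a recording and
# to be iterable over audio segments, like AudioDevice in the snippet above.
audio_file = os.path.expanduser('~/recordings/sample.wav')

with AudioFile(audio_file) as source:
    for sample in source:
        prediction = model.predict(sample)
        print(prediction)
```

Unlike the microphone version, there is presumably no need to pause/resume the source while processing, since a file source cannot drop frames the way a live capture device can.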
@blacklight Thank you for your advice, I'll take a look at it!
Hello, I have a question: how can we run inference with the model on an audio file instead of a microphone?