<!DOCTYPE html>
<html>
<head>
</head>
<body>
<script>
// Ask for microphone access (webkit-prefixed API; this targets Chrome).
navigator.webkitGetUserMedia({ audio: true }, function(s){
  window.AudioContext = window.AudioContext || window.webkitAudioContext;
  var audioContext = new window.AudioContext();

  // source -> analyser -> (optionally) output
  var mediaStreamSource = audioContext.createMediaStreamSource(s);
  var analyser = audioContext.createAnalyser();
  analyser.smoothingTimeConstant = 0.10;
  mediaStreamSource.connect(analyser);
  //analyser.connect(audioContext.destination);

  // Dump the current frequency-domain snapshot once a second.
  setInterval(function(){
    var data = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(data);
    console.log(data);
    // find a better way to display this...
    console.log(analyser.frequencyBinCount);
  }, 1000);
}, function(err){
  // webkitGetUserMedia expects an error callback as its third argument.
  console.error("getUserMedia failed:", err);
});
</script>
</body>
</html>
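
The console.log output above is hard to work with. One way to "copy to file" (the first preliminary step below) is to buffer a batch of FFT frames and trigger a download from the page. This is a minimal sketch, assuming it runs in the same callback scope as the analyser above (e.g., replacing the setInterval block); the frame count, interval, and file name are arbitrary choices, not part of the original gist:

// Hypothetical helper: buffer FFT frames, then offer them as a JSON download.
// Assumes `analyser` is the AnalyserNode created above.
var frames = [];

var capture = setInterval(function(){
  var data = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(data);
  frames.push(Array.prototype.slice.call(data)); // plain array so it JSON-serialises

  if (frames.length >= 100) {        // ~100 frames at 100 ms intervals = ~10 s of audio
    clearInterval(capture);
    var blob = new Blob([JSON.stringify(frames)], { type: "application/json" });
    var a = document.createElement("a");
    a.href = URL.createObjectURL(blob);
    a.download = "fft-frames.json";  // placeholder file name
    document.body.appendChild(a);
    a.click();                       // prompts the browser to save the file
  }
}, 100);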

plan

goal: train a classifier to differentiate between the spoken words "yes" and "no" with high accuracy

method outline

preliminary steps:

  • extract FFT data from the webpage microphone and copy it to a file (see the export sketch after the HTML above)
  • write a classifier in julia (a rough sketch of one possible approach follows this list)
  • ensure the classifier works well
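
The plan names julia for the classifier; no classifier code exists in the gist, so the following is only a language-agnostic sketch of one workable approach (a nearest-centroid classifier over time-averaged FFT frames), written in JavaScript since that would also cover the later "port to the browser" step. The data layout (arrays of FFT frame arrays per label) and all function names are assumptions:

// Hypothetical nearest-centroid classifier over averaged FFT frames.
// `examples` is assumed to look like { yes: [recording, ...], no: [recording, ...] },
// where each recording is an array of FFT frame arrays (as exported above).

function averageFrames(frames) {
  // Collapse a recording (many FFT frames) into one mean spectrum.
  var n = frames.length, bins = frames[0].length;
  var mean = new Array(bins).fill(0);
  frames.forEach(function (frame) {
    for (var i = 0; i < bins; i++) mean[i] += frame[i] / n;
  });
  return mean;
}

function train(examples) {
  // One centroid (the mean of the per-recording mean spectra) per label.
  var centroids = {};
  Object.keys(examples).forEach(function (label) {
    centroids[label] = averageFrames(examples[label].map(averageFrames));
  });
  return centroids;
}

function classify(centroids, frames) {
  // Return the label whose centroid is nearest (squared Euclidean distance).
  var x = averageFrames(frames);
  var best = null, bestDist = Infinity;
  Object.keys(centroids).forEach(function (label) {
    var d = 0;
    for (var i = 0; i < x.length; i++) {
      var diff = x[i] - centroids[label][i];
      d += diff * diff;
    }
    if (d < bestDist) { bestDist = d; best = label; }
  });
  return best;
}

A proper classifier (logistic regression, an SVM, etc.) over the same averaged-spectrum features would likely do better; this is just the simplest thing that could separate two words with distinct spectra.
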

future steps:

  • port julia classifier to the browser
  • make everything run in the browser

offshoot:

  • have speech analysis always running on localhost
  • send speech commands to a node.js server on localhost (see the sketch after this list)
  • have the node.js server act on those commands
  • a hubot-style bot driven by speech
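
For the offshoot, the browser page could POST each recognised command to a local node.js process. A minimal sketch using only the built-in http module; the port and route are placeholder choices:

// Hypothetical localhost command receiver (node.js).
var http = require("http");

http.createServer(function (req, res) {
  if (req.method === "POST" && req.url === "/command") {
    var body = "";
    req.on("data", function (chunk) { body += chunk; });
    req.on("end", function () {
      console.log("speech command:", body);  // e.g. "yes" or "no"
      // ...trigger whatever the command should do here...
      res.writeHead(200);
      res.end("ok");
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);   // placeholder port

// Browser side, sent after classification (would need CORS headers or a
// same-origin page to work in practice):
//   fetch("http://localhost:8080/command", { method: "POST", body: label });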