Background-blurred Twilio video track
Usage example:

```tsx
export const MyComponent = () => {
  const { toggleVideoBlur, isVideoBlur } = useBlurLocalTrack({})

  const _toggleVideoBlur = async () => {
    // Unpublish the current video track publications before swapping in the new track
    twilioRoom.localParticipant.videoTracks.forEach((publication) => publication.unpublish())
    const videoTrack = await toggleVideoBlur()
    await wait(1000)
    twilioRoom.localParticipant.publishTrack(videoTrack)
  }

  return (
    <button onClick={_toggleVideoBlur}>
      {isVideoBlur ? 'Disable Blur' : 'Enable Blur'}
    </button>
  )
}
```
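The snippet above assumes a `twilioRoom` already exists in scope. As a minimal sketch, it could be created with twilio-video's `connect`; the token handling here is hypothetical and must come from your own server:

```tsx
import { connect, Room } from 'twilio-video'

// Hypothetical setup: `accessToken` must be issued by your own token server.
let twilioRoom: Room

async function joinRoom(accessToken: string, roomName: string) {
  // connect() resolves with a Room once the local participant has joined
  twilioRoom = await connect(accessToken, {
    name: roomName,
    audio: true,
    video: true,
  })
}
```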
The host page loads TensorFlow from a CDN and provides the canvas and video elements:

```html
<!-- [TensorFlow (for video blur)] -->
<!-- Installing via CDN is the easiest way (Webpack + npm produced errors). Pin package versions as appropriate. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/body-pix"></script>
<canvas id="canvas" class="renderer" width="640" height="480"></canvas>
<video id="video" class="renderer" width="640" height="480"></video>
<!-- [/TensorFlow] -->
</body>
</html>
```
The hook itself:

```ts
import { useCallback, useState } from 'react'
import { wait } from 'src/utilities/wait'
import { LocalVideoTrack } from 'twilio-video'

declare global {
  export interface HTMLCanvasElement {
    captureStream(frameRate: number): MediaStream
  }
  // body-pix is loaded globally by the CDN scripts above
  var bodyPix: typeof import('@tensorflow-models/body-pix')
}

interface UseBlurLocalTrackArgs {
  onCreateVideoTrack?: (track: LocalVideoTrack) => void
}

// TODO: someday we'll turn this into an OSS package
export const useBlurLocalTrack = ({
  onCreateVideoTrack,
}: UseBlurLocalTrackArgs) => {
  const [isVideoBlur, setIsVideoBlur] = useState(false)

  const toggleVideoBlur = useCallback(async () => {
    setIsVideoBlur(!isVideoBlur)
    const canvas = document.getElementById('canvas') as HTMLCanvasElement
    const video = document.getElementById('video') as HTMLVideoElement
    const stream = await navigator.mediaDevices.getUserMedia({
      video: true,
      audio: false,
    })

    // Blur is currently on: stop rendering and hand back the raw camera track
    if (isVideoBlur) {
      video.pause()
      video.srcObject = null
      const videoTrack = new LocalVideoTrack(stream.getVideoTracks()[0])
      onCreateVideoTrack?.(videoTrack)
      return videoTrack
    }

    const net = await bodyPix.load({
      multiplier: 0.75,
      quantBytes: 2,
      architecture: 'MobileNetV1',
      outputStride: 16,
    })

    video.addEventListener('play', () => {
      async function step() {
        const segmentation = await net.segmentPerson(video)
        bodyPix.drawBokehEffect(
          canvas,
          video,
          segmentation,
          6, // backgroundBlurAmount
          2, // edgeBlurAmount
          false, // flipHorizontal
        )
        // Add a delay to lower the frame rate for performance reasons
        setTimeout(() => {
          requestAnimationFrame(step)
        }, 80) // TODO: make this value configurable based on the user's device
      }
      requestAnimationFrame(step)
    })

    video.srcObject = stream
    // Wait for the video stream to be ready
    await wait(1000)
    await video.play()
    // This delay is important for Firefox!
    await wait(1000)

    const localVideoTrack = new LocalVideoTrack(
      canvas.captureStream(10).getVideoTracks()[0]
    )
    onCreateVideoTrack?.(localVideoTrack)
    return localVideoTrack
  }, [isVideoBlur, onCreateVideoTrack])

  return { toggleVideoBlur, isVideoBlur }
}
```
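`wait` is imported from `src/utilities/wait` but not included in the gist; a minimal implementation consistent with how it is used here would be:

```ts
// src/utilities/wait (sketch): resolve after the given number of milliseconds
export const wait = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms))
```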
Packaging this as OSS would require a lot of work and I'm not interested in doing it, so please adapt this example to fit your project.
Basically, this code does the following:

- Capture video from the webcam using `getUserMedia`
- Render the video
- Render the video to a canvas from the video tag
- Blur the background of the canvas data
- Create a new video track from the canvas

So the data flow looks like this:

`WebCam -> VideoStream -> video -> canvas -> VideoStream -> VideoTrack`
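To make that flow concrete, here is a standalone sketch of the same pipeline with React, Twilio, and body-pix stripped out. The element ids match the HTML above; the full-frame canvas blur filter is a stand-in for body-pix's person-aware bokeh:

```ts
// WebCam -> VideoStream -> video -> canvas -> VideoStream -> VideoTrack
async function canvasTrackFromWebcam(): Promise<MediaStreamTrack> {
  const video = document.getElementById('video') as HTMLVideoElement
  const canvas = document.getElementById('canvas') as HTMLCanvasElement
  const ctx = canvas.getContext('2d')!

  // WebCam -> VideoStream
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false })

  // VideoStream -> video
  video.srcObject = stream
  await video.play()

  // video -> canvas: body-pix replaces this naive blur-everything filter
  // with a segmentation-driven bokeh that keeps the person sharp
  const step = () => {
    ctx.filter = 'blur(6px)'
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
    requestAnimationFrame(step)
  }
  requestAnimationFrame(step)

  // canvas -> VideoStream -> VideoTrack (10 fps, as in the hook above)
  return canvas.captureStream(10).getVideoTracks()[0]
}
```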
Can you show an example of this being used in twilio-video-app-react?