Forked from Haerezis/libjitsi_bundle_streams.java
Created September 13, 2016 16:52
An example to show how to bundle RTP streams with ice4j and libjitsi
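//The gist shows only the method body; these are the imports it relies on
//(wildcards, because the exact class placement varies a little across
//libjitsi/ice4j versions; org.ice4j.ice.harvest is only needed for the
//commented ICE sketch further down).
import java.net.*;
import java.util.*;

import org.ice4j.*;
import org.ice4j.ice.*;
import org.ice4j.ice.harvest.*;

import org.jitsi.service.libjitsi.*;
import org.jitsi.service.neomedia.*;
import org.jitsi.service.neomedia.device.*;
import org.jitsi.service.neomedia.format.*;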
LibJitsi.start();
MediaService mediaService = LibJitsi.getMediaService();

//I assume that I have working video and audio MediaDevices
MediaDevice randomVideoDevice = createVideoDevice();
MediaDevice randomAudioDevice = createAudioDevice();
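/*
createVideoDevice() and createAudioDevice() are "fake" helpers. A minimal
sketch of what they could return, assuming the default capture devices are
good enough:

    MediaDevice videoDevice
        = mediaService.getDefaultDevice(MediaType.VIDEO, MediaUseCase.CALL);
    MediaDevice audioDevice
        = mediaService.getDefaultDevice(MediaType.AUDIO, MediaUseCase.CALL);
*/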
//I create the MediaFormat for each stream
MediaFormat videoFormat
    = mediaService.getFormatFactory().createMediaFormat("vp8");
MediaFormat audioFormat
    = mediaService.getFormatFactory().createMediaFormat("opus");
/*
I create two MediaStreams, one with the video device, the other with the
audio device. I pass a null StreamConnector because the connector can be
set later, but the SrtpControl cannot; that's why I use this constructor.
The SrtpControl will be shared by the two streams, so it has to be
created beforehand.
*/
SrtpControl sameSrtpControl = mediaService.createSrtpControl(SrtpControlType.DTLS_SRTP);
MediaStream videoMediaStream = mediaService.createMediaStream(
        null,
        randomVideoDevice,
        sameSrtpControl);
videoMediaStream.addDynamicRTPPayloadType((byte) 100, videoFormat);
MediaStream audioMediaStream = mediaService.createMediaStream(
        null,
        randomAudioDevice,
        sameSrtpControl);
//opus is also a dynamic payload type, so the audio stream presumably needs
//a mapping too (111 is a common choice)
audioMediaStream.addDynamicRTPPayloadType((byte) 111, audioFormat);
//Even streams set to SENDONLY can receive data, but in that case playback is not possible.
//Here I set the streams to SENDRECV because they will send and receive RTP packets (duh...).
videoMediaStream.setDirection(MediaDirection.SENDRECV);
audioMediaStream.setDirection(MediaDirection.SENDRECV);

//I set the format of each RTP stream
videoMediaStream.setFormat(videoFormat);
audioMediaStream.setFormat(audioFormat);
////////////////////////////////////////////
//Starting from here, I set up DTLS for the streams.
//If DTLS does not interest you, you can skip this part.
DtlsControl dtlsControl = (DtlsControl) sameSrtpControl;
/*
You can also write:
DtlsControl sameControl = (DtlsControl) videoMediaStream.getSrtpControl();
or
DtlsControl sameControl = (DtlsControl) audioMediaStream.getSrtpControl();
Normally they should return the same SrtpControl.
*/
//See RFC 4145 to understand what Setup is (here I chose it arbitrarily)
dtlsControl.setSetup(DtlsControl.Setup.PASSIVE);
//Normally, you need to set the fingerprint and the hash function of the remote target
//(and also send yours to the remote target).
//For now, libjitsi doesn't strictly need this, so you can skip it if you want,
//but it's much cleaner to do it.
//I assume here that I have already received the remote fingerprint/hash function.
Map<String,String> dtlsMap = new HashMap<String,String>();
//sendFingerprint takes care of sending your fingerprint and hash function
//to the remote target (it is a "fake" function)
sendFingerprint(dtlsControl.getLocalFingerprint(),
        dtlsControl.getLocalFingerprintHashFunction());
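//For reference: with SDP-based signaling, the fingerprint travels as an
//"a=fingerprint" attribute (RFC 4572). This is the line a sendFingerprint()
//implementation could emit (built here just for illustration):
String localFingerprintAttribute
    = "a=fingerprint:" + dtlsControl.getLocalFingerprintHashFunction()
        + " " + dtlsControl.getLocalFingerprint();
//which yields something like "a=fingerprint:sha-256 AB:CD:..."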
//The remote target also uses just one connection for RTP, so you get one fingerprint for both audio and video.
dtlsMap.put(
        remoteVideoAndAudioStreamHashFunction,
        remoteVideoAndAudioStreamFingerprint);
//The DtlsControl needs a Map of hash functions and their corresponding fingerprints,
//as presented by the remote endpoint via the signaling path.
dtlsControl.setRemoteFingerprints(dtlsMap);
//////////////////////////////////////////////
// End of the DTLS set up
//////////////////////////////////////////////
//ICE SETUP
//I use the function I wrote for http://blog.sharedmemory.fr/en/2014/06/22/gsoc-2014-part-1-ice4j/
//I create an Agent containing only one IceMediaStream, which will handle both the audio and the video stream.
Set<String> nameSet = new HashSet<String>();
nameSet.add("audio_video");
Agent iceAgent = generateIceMediaStream(nameSet, stunAddresses, turnAddresses);
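/*
A sketch of what generateIceMediaStream() from that blog post boils down to
(the ports are illustrative; stunAddresses/turnAddresses are assumed to be
collections of TransportAddress):

    Agent agent = new Agent();
    for(TransportAddress stunAddress : stunAddresses)
        agent.addCandidateHarvester(new StunCandidateHarvester(stunAddress));
    for(TransportAddress turnAddress : turnAddresses)
        agent.addCandidateHarvester(new TurnCandidateHarvester(turnAddress));
    for(String name : nameSet)
    {
        IceMediaStream stream = agent.createMediaStream(name);
        //the first component created is RTP, the second RTCP
        agent.createComponent(stream, Transport.UDP, 10000, 10000, 11000);
        agent.createComponent(stream, Transport.UDP, 10001, 10000, 11000);
    }
    return agent;
*/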
//YOU NEED TO SEND YOUR ICE CREDENTIALS BEFORE STARTING ICE.
//How you do it depends on the signaling protocol you use,
//but it has to happen here, before the next instruction.
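/*
A minimal sketch of that exchange; sendIceCredentials, remoteUfrag and
remotePassword are hypothetical, like sendFingerprint above. The remote
candidates also have to be added to each Component
(Component.addRemoteCandidate):

    sendIceCredentials(iceAgent.getLocalUfrag(), iceAgent.getLocalPassword());
    IceMediaStream stream = iceAgent.getStream("audio_video");
    stream.setRemoteUfrag(remoteUfrag);
    stream.setRemotePassword(remotePassword);
*/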
//Start the ICE process
iceAgent.startConnectivityEstablishment();
//Running the ICE process doesn't block the thread, so you can do whatever you want until it terminates,
//but you must not use the sockets the Agent created before ICE terminates.
//Here I simply sleep until ICE terminates.
while(IceProcessingState.TERMINATED != iceAgent.getState())
{
    System.out.println("Connectivity Establishment in process");
    try
    {
        Thread.sleep(1500);
    }
    catch (InterruptedException e)
    {
        e.printStackTrace();
    }
}
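/*
Instead of polling, you can register a listener and get notified when the
processing state changes (a sketch; what you do in the listener body is up
to you):

    iceAgent.addStateChangeListener(evt -> {
        if(IceProcessingState.TERMINATED.equals(evt.getNewValue()))
        {
            //continue the set up from here
        }
    });
*/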
//////////////////////////////////////////////
//END OF ICE SETUP
IceMediaStream iceMediaStream = iceAgent.getStream("audio_video");
CandidatePair rtpPair = iceMediaStream.getComponent(Component.RTP).getSelectedPair();
CandidatePair rtcpPair = iceMediaStream.getComponent(Component.RTCP).getSelectedPair();
DatagramSocket rtpSocket = rtpPair.getLocalCandidate().getDatagramSocket();
DatagramSocket rtcpSocket = rtcpPair.getLocalCandidate().getDatagramSocket();
//I use the same DatagramSockets for both streams
StreamConnector videoConnector
    = new DefaultStreamConnector(
            rtpSocket,
            rtcpSocket);
StreamConnector audioConnector
    = new DefaultStreamConnector(
            rtpSocket,
            rtcpSocket);
/*
It's possible that you don't need two StreamConnectors: you may be able to
create just one, with rtpSocket and rtcpSocket, and set the same
StreamConnector on both streams (to try; see the sketch below).
*/
videoMediaStream.setConnector(videoConnector);
audioMediaStream.setConnector(audioConnector);
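/*
The single-connector variant mentioned above would look like this
(untested, as the comment says):

    StreamConnector sharedConnector
        = new DefaultStreamConnector(rtpSocket, rtcpSocket);
    videoMediaStream.setConnector(sharedConnector);
    audioMediaStream.setConnector(sharedConnector);
*/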
//We set the "name"/type of each stream (here "video", then "audio")
videoMediaStream.setName(MediaType.VIDEO.toString());
audioMediaStream.setName(MediaType.AUDIO.toString());
//We create a MediaStreamTarget for each stream, with the same endpoints, to bundle the streams.
//It's possible that you only need one MediaStreamTarget set on both streams (to try; see the sketch after the two setTarget calls).
videoMediaStream.setTarget(
        new MediaStreamTarget(
                rtpPair.getRemoteCandidate().getTransportAddress(),
                rtcpPair.getRemoteCandidate().getTransportAddress()));
audioMediaStream.setTarget(
        new MediaStreamTarget(
                rtpPair.getRemoteCandidate().getTransportAddress(),
                rtcpPair.getRemoteCandidate().getTransportAddress()));
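/*
Likewise, the untested single shared target would be:

    MediaStreamTarget sharedTarget
        = new MediaStreamTarget(
                rtpPair.getRemoteCandidate().getTransportAddress(),
                rtcpPair.getRemoteCandidate().getTransportAddress());
    videoMediaStream.setTarget(sharedTarget);
    audioMediaStream.setTarget(sharedTarget);
*/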
//When you start the SrtpControl, you need to give it a MediaType, but in this prototype example the single SrtpControl handles multiple MediaTypes.
//It doesn't seem like libjitsi uses the MediaType in the SrtpControl, but I can't be 100% sure.
//Here I just start it with an arbitrary MediaType.
sameSrtpControl.start(MediaType.VIDEO);

//You can finally start the streams; the target should quickly receive the first RTP packets.
videoMediaStream.start();
audioMediaStream.start();
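/*
When you are done, release everything in roughly the reverse order
(a sketch; stop/close, free and stop are the standard tear-down calls):

    videoMediaStream.stop();
    audioMediaStream.stop();
    videoMediaStream.close();
    audioMediaStream.close();
    iceAgent.free();
    LibJitsi.stop();
*/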