# room-<unique room ID>: {
# description = "This is my awesome room"
# is_private = true|false (whether this room should be in the public list, default=true)
# secret = "<optional password needed for manipulating (e.g. destroying) the room>"
# pin = "<optional password needed for joining the room>"
# sampling_rate = <sampling rate> (e.g., 16000 for wideband mixing)
# audiolevel_ext = true|false (whether the ssrc-audio-level RTP extension must
# be negotiated/used or not for new joins, default=true)
# audiolevel_event = true|false (whether to emit event to other users or not, default=false)
# audio_active_packets = 100 (number of packets with audio level, default=100, 2 seconds)
# audio_level_average = 25 (average value of audio level, 127=muted, 0='too loud', default=25)
# default_prebuffering = number of packets to buffer before decoding each participant (default=6)
# record = true|false (whether this room should be recorded, default=false)
# record_file = "/path/to/recording.wav" (where to save the recording)
#
# The following lines are only needed if you want the mixed audio
# to be automatically forwarded via plain RTP to an external component
# (e.g., an ffmpeg script, or a gstreamer pipeline) for processing;
# see the commented example after this template. Plain RTP is used
# by default; SRTP must be configured explicitly if needed.
# rtp_forward_id = numeric RTP forwarder ID for referencing it via API (optional: random ID used if missing)
# rtp_forward_host = "<host address to forward RTP packets of mixed audio to>"
# rtp_forward_host_family = "<ipv4|ipv6; by default, first family returned by DNS request>"
# rtp_forward_port = port to forward RTP packets of mixed audio to
# rtp_forward_ssrc = SSRC to use when streaming (optional: stream_id used if missing)
# rtp_forward_codec = opus (default), pcma (A-Law) or pcmu (mu-Law)
# rtp_forward_ptype = payload type to use when streaming (optional: only read for Opus, 100 used if missing)
# rtp_forward_srtp_suite = length of authentication tag (32 or 80)
# rtp_forward_srtp_crypto = "<key to use as crypto (base64 encoded key as in SDES)>"
# rtp_forward_always_on = true|false, whether silence should be forwarded when the room is empty (optional: false used if missing)
#}
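#
# A minimal sketch of a room using the forwarding options above (the
# room ID, PIN and port are illustrative, not part of a stock setup):
# the mixed audio is pushed as plain RTP to a local port, where an
# external tool (e.g., an ffmpeg script or a gstreamer pipeline)
# could pick it up for processing.
#
#room-1234: {
# description = "Example forwarded room"
# is_private = true
# pin = "1234"
# sampling_rate = 16000
# rtp_forward_id = 10
# rtp_forward_host = "127.0.0.1"
# rtp_forward_port = 5006
# rtp_forward_codec = "opus"
# rtp_forward_ptype = 100
#}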
general: {
#admin_key = "supersecret" # If set, rooms can be created via API only
# if this key is provided in the request
#lock_rtp_forward = true # Whether the admin_key above should be
# enforced for RTP forwarding requests too
#lock_play_file = true # Whether the admin_key above should be
# enforced for playing .opus files too
#record_tmp_ext = "tmp" # Optional temporary extension to add to filenames
# while recording: e.g., setting "tmp" would mean
# .wav --> .wav.tmp until the file is closed
events = true # Whether events should be sent to event
# handlers (default=true)
# By default, integers are used as a unique ID for both rooms and participants.
# In case you want to use strings instead (e.g., a UUID), set string_ids to true.
#string_ids = true
}
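#
# Note: when admin_key is set, a "create" request sent to the plugin
# has to carry it. A minimal sketch of such a message body (the room
# ID and description are illustrative):
#
# {
#         "request" : "create",
#         "room" : 5678,
#         "description" : "Room created via API",
#         "admin_key" : "supersecret"
# }
#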
room-2244: {
description = "Demo Room"
secret = "adminpwd"
sampling_rate = 48000
record = false
#record_file = "/path/to/recording.wav"
#rtp_forward_id = 1
#rtp_forward_host = "127.0.0.1"
#rtp_forward_port = 5002
#rtp_forward_ptype = 111
#rtp_forward_always_on = true
}
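#
# Everything below belongs to the Streaming plugin configuration
# (mountpoints), which in a stock Janus install lives in a separate
# file from the AudioBridge room configuration above.
#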
# stream-name: {
# type = rtp|live|ondemand|rtsp
# rtp = stream originated by an external tool (e.g., gstreamer or
# ffmpeg) and sent to the plugin via RTP
# live = local file streamed live to multiple listeners
# (multiple listeners = same streaming context)
# ondemand = local file streamed on-demand to a single listener
# (multiple listeners = different streaming contexts)
# rtsp = stream originated by an external RTSP feed (only
# available if libcurl support was compiled)
# id = <unique numeric ID> (if missing, a random one will be generated)
# description = This is my awesome stream
# metadata = An optional string that can contain any metadata (e.g., JSON)
# associated with the stream you want users to receive
# is_private = true|false (private streams don't appear when you do a 'list'
# request)
# secret = <optional password needed for manipulating (e.g., destroying
# or enabling/disabling) the stream>
# pin = <optional password needed for watching the stream>
# filename = path to the local file to stream (only for live/ondemand)
# audio = true|false (do/don't stream audio)
# video = true|false (do/don't stream video)
# The following options are only valid for the 'rtp' type:
# data = true|false (do/don't stream text via datachannels)
# audioport = local port for receiving audio frames
# audiortcpport = local port, if any, for receiving and sending audio RTCP feedback
# audiomcast = multicast group port for receiving audio frames, if any
# audioiface = network interface or IP address to bind to, if any (binds to all otherwise)
# audiopt = <audio RTP payload type> (e.g., 111)
# audiortpmap = RTP map of the audio codec (e.g., opus/48000/2)
# audioskew = true|false (whether the plugin should perform skew
# analysis and compensation on the incoming audio RTP stream, EXPERIMENTAL)
# videoport = local port for receiving video frames
# videortcpport = local port, if any, for receiving and sending video RTCP feedback
# videomcast = multicast group port for receiving video frames, if any
# videoiface = network interface or IP address to bind to, if any (binds to all otherwise)
# videopt = <video RTP payload type> (e.g., 100)
# videortpmap = RTP map of the video codec (e.g., VP8/90000)
# videobufferkf = true|false (whether the plugin should store the latest
# keyframe and send it immediately for new viewers, EXPERIMENTAL)
# videosimulcast = true|false (do|don't enable video simulcasting)
# videoport2 = second local port for receiving video frames (only for rtp, and simulcasting)
# videoport3 = third local port for receiving video frames (only for rtp, and simulcasting)
# videoskew = true|false (whether the plugin should perform skew
# analysis and compensation on the incoming video RTP stream, EXPERIMENTAL)
# videosvc = true|false (whether the video will have SVC support; works only for VP9-SVC, default=false)
# collision = in case of collision (more than one SSRC hitting the same port), the plugin
# will discard incoming RTP packets with a new SSRC unless this many milliseconds
# have passed, in which case it will switch to the new SSRC (0=disabled)
# dataport = local port for receiving data messages to relay
# dataiface = network interface or IP address to bind to, if any (binds to all otherwise)
# datatype = text|binary (type of data this mountpoint will relay, default=text)
# databuffermsg = true|false (whether the plugin should store the latest
# message and send it immediately for new viewers)
# threads = number of threads to assist with the relaying part, which can help
# if you expect a lot of viewers that may cause the RTP receiving part
# in the Streaming plugin to slow down and fail to catch up (default=0)
#
# In case you want to use SRTP for your RTP-based mountpoint, you'll need
# to configure the SRTP-related properties as well, namely the suite to
# use for hashing (32 or 80) and the crypto information for decrypting
# the stream (as a base64 encoded string the way SDES does it). Notice
# that with SRTP involved you'll have to pay extra attention to what you
# feed the mountpoint, as you may risk getting SRTP decrypt errors:
# srtpsuite = 32
# srtpcrypto = WbTBosdVUZqEb6Htqhn+m3z7wUh4RJVR8nE15GbN
#
# The Streaming plugin can also be used to (re)stream media that has been
# encrypted using something that can be consumed via Insertable Streams.
# In that case, we only need to be aware of it, so that we can send the
# info along with the SDP. How to decrypt the media is out of scope, and
# up to the application since, again, this is end-to-end encryption and
# so neither Janus nor the Streaming plugin have access to anything.
# DO NOT SET THIS PROPERTY IF YOU DON'T KNOW WHAT YOU'RE DOING!
# e2ee = true
#
# The following options are only valid for the 'rtsp' type
# (see the commented example after this template):
# url = RTSP stream URL (only for restreaming RTSP)
# rtsp_user = RTSP authorization username (only if type=rtsp)
# rtsp_pwd = RTSP authorization password (only if type=rtsp)
# rtspiface = network interface or IP address to bind to, if any (binds to all otherwise), when receiving RTSP streams
# rtsp_failcheck = whether an error should be returned if connecting to the RTSP server fails (default=true)
#
# Notice that, for 'rtsp' mountpoints, normally the plugin uses the exact
# SDP rtpmap and fmtp attributes the remote camera or RTSP server sent.
# In case the values set remotely are known to conflict with WebRTC viewers,
# you can override both using the settings introduced above.
#
# To test the 'gstreamer-sample' example, check the test_gstreamer.sh
# script in the plugins/streams folder. The live and on-demand audio
# file streams use a couple of files (radio.alaw, music.mulaw) that are
# provided in the plugins/streams folder; a commented 'live' stanza
# using one of them follows this template.
#}
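#
# A minimal sketch of a 'live' mountpoint using one of the sample
# files mentioned above (the path is illustrative and depends on
# where Janus was installed):
#
#file-live-sample: {
# type = "live"
# id = 2
# description = "A-law file source"
# filename = "/usr/share/janus/streams/radio.alaw"
# audio = true
# video = false
#}
#
# And a minimal sketch of an 'rtsp' mountpoint (the URL and
# credentials are placeholders for your own RTSP source):
#
#rtsp-sample: {
# type = "rtsp"
# id = 99
# description = "RTSP restreaming test"
# audio = false
# video = true
# url = "rtsp://127.0.0.1:8554/unicast"
# rtsp_user = "username"
# rtsp_pwd = "password"
# secret = "adminpwd"
#}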
general: {
#admin_key = "supersecret" # If set, mountpoints can be created via API
# only if this key is provided in the request
#rtp_port_range = "20000-40000" # Range of ports to use for RTP/RTCP when '0' is
# passed as port for a mountpoint (default=10000-60000)
#events = true # Whether events should be sent to event
# handlers (default=true)
# By default, integers are used as a unique ID for mountpoints. In case
# you want to use strings instead (e.g., a UUID), set string_ids to true.
#string_ids = true
}
#
# This is an example of an RTP source stream, which is what you'll need
# in the vast majority of cases: here, the Streaming plugin will bind to
# some ports, and expect media to be sent by an external source (e.g.,
# FFmpeg or Gstreamer). This sample listens on 5002 for audio (Opus) and
# 5004 for video (VP8), which is what the sample gstreamer script in the
# plugins/streams folder sends to. Whatever is sent to those ports will
# be the source of a WebRTC broadcast users can subscribe to.
#
#rtp-sample: {
# type = "rtp"
# id = 1
# description = "Opus/VP8 live stream coming from external source"
# metadata = "You can use this metadata section to put any info you want!"
# audio = true
# video = true
# audioport = 5002
# audiopt = 111
# audiortpmap = "opus/48000/2"
# videoport = 5004
# videopt = 100
# videortpmap = "VP8/90000"
# secret = "adminpwd"
#}
squawk-english: {
type = "rtp"
id = 1
description = "Opus live stream coming from Squawk english room"
metadata = "Squawk english stream for realtime audio updates related to headlines, price movement and rumors as stories develop"
audio = true
video = false
audioport = 5002
audiopt = 111
audiortpmap = "opus/48000/2"
}