Broadcasting multiple tracks over RTMP

We have inbound audio and video on leg A (from verto) and outbound audio from leg B, and we would like to stream all of these in separate tracks over RTMP. We currently mix them to fewer tracks by using a conference, but we want to stop doing that and keep the tracks separate.

I looked at the mod_rtmp page in the FreeSWITCH documentation but did not find any details about track separation. (By the way, the content looks very nice on your new site compared to Confluence.)

Do we need to change leg A from audio=sendrecv to separate tracks for send and recv? Do we need to configure FreeSWITCH in a special way?

Try recording in stereo; it should put one party on the left channel and the other party on the right channel.
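A minimal dialplan sketch of that suggestion (the extension name, destination number, file path, and bridge target are just examples, not anything your setup requires):

```xml
<extension name="stereo_record_example">
  <condition field="destination_number" expression="^record_test$">
    <!-- RECORD_STEREO asks the recorder to keep each leg on its own channel -->
    <action application="set" data="RECORD_STEREO=true" inline="true"/>
    <!-- record the bridged call to a local file; path is a placeholder -->
    <action application="record_session" data="/tmp/stereo_test.wav"/>
    <action application="bridge" data="user/1001"/>
  </condition>
</extension>
```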


Our dialplan already included

```xml
<action application="set" data="RECORD_STEREO=true" inline="true"/>
```

and we found that the audio of both legs ends up on both channels (FreeSWITCH 1.10.7). Is there a way to keep the audio of the legs separate in RTMP?

I do not believe so at this time. Which module are you using to broadcast?


We use record_session to send RTMP to a nearby server. I would have guessed that uses mod_rtmp. How would I check?
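For reference, our dialplan does roughly the following (the server URL and stream key are placeholders):

```xml
<!-- stream the session to a nearby RTMP server; URL is a placeholder -->
<action application="record_session" data="rtmp://media.example.com/live/streamkey"/>
```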

mod_rtmp doesn’t do that. You can easily verify this by unloading mod_rtmp: the stream should still work. It should be mod_av, and I have never tested stereo, but if you search for 44100 in the code you’ll find some RTMP-related params hard-coded in a few places.

FYI, we have changed those hard-coded values in our local builds, and the streams end up at the desired sampling rate.

Does it seem feasible for newbies to edit the same file to separate the audio by leg for stereo?

It seems some of those should be able to be overridden with {} params on the file. Maybe we didn’t expose those.
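If those params were exposed, I would expect the override to look something like this. This is speculative: the parameter names (e.g. sample_rate, channels) are guesses and may not actually be honored by mod_av, and the URL is a placeholder:

```xml
<!-- speculative: {} params prefixed to the target, if mod_av exposed them -->
<action application="record_session"
        data="{sample_rate=44100,channels=2}rtmp://media.example.com/live/streamkey"/>
```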

A possibly related problem: on rare occasions the RTMP video shows two side-by-side views of leg A. At first we thought this was due to phone browsers sending narrow frames that the conference video canvas “tiled” to fill the canvas, but it happens only occasionally from phone browsers, and we sometimes see it from Windows and macOS browsers as well. We’re trying to find a way to reproduce it consistently.

Re: tiling in the RTMP video stream, we see it consistently with an iPhone SE 2020 and an iPhone 13 in portrait orientation, but there is no tiling in the mp4 recording.
Is there a way to configure the RTMP canvas so it neither tiles nor stretches?

There is no way to accomplish what you want with that method. I’m not sure what the best approach would be.


I had a look at avformat.c, and I wondered whether fill_avframe is what causes the tiling.

(I wanted to include a permalink to the line on GitHub, but I’m not permitted to post links.)

In what file(s) does FreeSWITCH compose the RTMP stream? Is it avformat.c, or something upstream of that?

Recording to mp4 does not result in tiling when using a device in portrait orientation, but broadcasting via RTMP does.

We’d like to inspect the same files to see whether the stream can be given one audio track per call leg.