If anyone is using/testing WebRTC I would love to hear how it is working for them :) I am hoping Simulcast makes an impact with smaller streamers/site operators.
* Cheaper servers. More competition and I want to see people running their own servers.
* Better video quality. Encoding from the source is going to be better than transcoding.
* No more bad servers. With E2E encryption via WebRTC you send video straight to your audience, and the server can't modify or surveil it.
* Better latency. No more time lost transcoding. I love low-latency streaming where people are connected to the community, not just blasting one-way video.
I would love to host an ultra high quality stream on my own web server, and then have that exact stream piped to YouTube live via OBS. Is there an easy way to do that now?
YouTube likely won't support streaming 3440x1440 60 FPS video, and while Discord technically supports it, they usually compress the footage fairly aggressively once it's sent up to the client, so I'd like to host my own; it only needs to support a few people. I wouldn't mind hosting it so my friends and side-project partners can watch me code and play games in high quality.
You can try it out at https://b.siobud.com to see if you like it first. If it fits your needs then go for the self-host :) I run my instance on Hetzner.
I want to add more features to it, but I have mostly been focused on OBS lately. If you have any ideas/needs that would make it work for you and your friends, I would love to hear them! Join the Discord and let's chat.
What I want to do next is a 're-broadcast' feature, so friends can stream to it + hang out. When they are ready they hit a button and it goes out to Twitch/YouTube etc...
I am hoping this space improves. I wanted to cast video to watch some stuff with friends last year, and the software to accomplish this now is both really heavy (does EVERY part of the process need to run an HTTP server?) and convoluted.
We ended up just doing a Discord screen share, which sidestepped all the tunnelling/transcoding/etc. issues that had made us give up on WebRTC.
Around last year I was using some custom plugins for OBS. I haven't used Broadcast Box, but I can pick it up to try sometime later.
> If that is still too heavy, what could I do to make it better?
I haven't picked it up yet to see whether it's complex enough to really need it, but it has the same pain point a few prior tools did: it's yet another service which I must configure via the browser, so it has to run an entire frontend for doing that rather than supporting config files.
I'd never heard of vdo.ninja. It sounds like the base use case, according to their main page, is the opposite, phone to WebRTC:
> In its simplest form, VDO.Ninja brings live video from a smartphone, tablet, or remote computer, directly into OBS Studio or other browser-enabled software.
I really hope I'm misreading what they're saying, but this sure sounds like they are doing a video encode for each peer, which is madness & obviously bad.
> VDO.Ninja is a peer-to-peer system. This means for each new person viewing your feed, a new encode is processed. It also is CPU bound since encoding usually takes place on the CPU. Take care not to overload your system. Keep an eye on your CPU usage.
The intro video also emphasizes that each person has to send video to all peers, that in fact it's not about sending to OBS, it's about having people in a room. And warns that room size of 10 is about as good as you'll get. Seemingly because of these limits.
But if it does what the original purpose states, streaming to OBS (a single consumer), it doesn't really matter. I'm intrigued to see how it handles sending multiple people's streams to OBS: if that's what the room is for, that's very rad (even if it's weirdly inefficient at it)!
I really like the idea of web-based tools for video capture, and for some video production. It's cool that vdo.ninja is here. But what the heck, this does not sound good.
Also I find it a weird claim that anyone would have heard of vdo.ninja but not webrtc. 3 results for https://hn.algolia.com/?q=vdo.ninja , about a thousand for webrtc. Always an interesting world, interesting people.
I've been waiting for the WHEP support PR to be merged so I can input video from a stream into OBS and mix it before outputting it again with WHIP. Or am I thinking about it wrong?
The WebRTC complexity came from our pipeline being ffmpeg → H.264 RTP over UDP → pion/webrtc TrackLocalStaticRTP (instead of a “normal” WebRTC source). Any time we changed monitor/crop or restarted the capture, the RTP stream effectively reset (SSRC/seq/timestamps and sometimes SPS/PPS cadence), and mobile browsers can stall the decoder and just stay black. We added “restart/renegotiation” because recreating the PeerConnection is the most reliable way to recover from those discontinuities.
What we still need to debug to make WebRTC solid:
* Capture side: full ffmpeg stderr logs + exact args when it goes black.
* RTP ingest: log SSRC/PT/seq gaps and verify SPS/PPS are regularly re-sent (e.g., with every keyframe).
* WebRTC states: log signaling/ICE/connection state transitions to catch races and "remote description not set" timing.
* Confirm whether the black screen is a capture issue vs. a decode/packetization issue (capture works via MJPEG, so likely the latter).
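For the RTP-ingest item, the SSRC/PT/seq logging can be done with no dependencies; here's a minimal stdlib-only Go sketch (fixed 12-byte header layout per RFC 3550; the names are illustrative, not pion's API):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// rtpInfo holds the header fields worth logging during ingest.
type rtpInfo struct {
	PayloadType uint8
	Seq         uint16
	Timestamp   uint32
	SSRC        uint32
}

// parseRTP pulls PT/seq/timestamp/SSRC out of a raw RTP packet
// (fixed 12-byte header, RFC 3550).
func parseRTP(pkt []byte) (rtpInfo, error) {
	if len(pkt) < 12 {
		return rtpInfo{}, fmt.Errorf("short packet: %d bytes", len(pkt))
	}
	return rtpInfo{
		PayloadType: pkt[1] & 0x7F,
		Seq:         binary.BigEndian.Uint16(pkt[2:4]),
		Timestamp:   binary.BigEndian.Uint32(pkt[4:8]),
		SSRC:        binary.BigEndian.Uint32(pkt[8:12]),
	}, nil
}

// gapDetector reports sequence-number gaps; a large gap (or an SSRC
// change) is exactly the discontinuity that can leave a decoder black.
type gapDetector struct {
	lastSeq uint16
	started bool
}

func (g *gapDetector) observe(seq uint16) int {
	defer func() { g.lastSeq = seq }()
	if !g.started {
		g.started = true
		return 0
	}
	// uint16 subtraction handles wraparound at 65535.
	return int(seq-g.lastSeq) - 1
}

func main() {
	g := &gapDetector{}
	for _, seq := range []uint16{100, 101, 105} {
		if gap := g.observe(seq); gap > 0 {
			fmt.Printf("seq %d: %d packets lost\n", seq, gap) // prints for seq 105: 3 packets lost
		}
	}
}
```

Hooking something like this into the ingest path and logging every SSRC change alongside the connection-state transitions should quickly show whether the black screen lines up with a stream reset.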
Instead of restarting/renegotiating, can you re-timestamp the packets? The swap-tracks example[0] shows a good way to do that. The renegotiation (especially multiple times with no real changes) is gonna be a PITA :)
Also, you should share it in https://pion.ly/discord; other people would love to see this. Super cool project.
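The bookkeeping re-timestamping needs is small: carry per-source offsets so seq/timestamp stay continuous across a restart. A stdlib-only sketch of the idea (field names are illustrative, not pion's API; the 3000-tick jump assumes 90 kHz video at ~30 fps):

```go
package main

import "fmt"

// header is the subset of RTP fields that must stay continuous
// across an ffmpeg restart.
type header struct {
	SSRC      uint32
	Seq       uint16
	Timestamp uint32
}

// rewriter maps packets from successive sources onto one continuous
// outgoing stream, so the remote decoder never sees a reset.
type rewriter struct {
	outSSRC uint32
	curSSRC uint32
	seqOff  uint16
	tsOff   uint32
	lastSeq uint16
	lastTS  uint32
	started bool
}

func (r *rewriter) rewrite(h header) header {
	if !r.started {
		r.started = true
		r.curSSRC = h.SSRC
	} else if h.SSRC != r.curSSRC {
		// New source (e.g. ffmpeg restarted): pick offsets so the output
		// continues right after the last packet we emitted.
		r.curSSRC = h.SSRC
		r.seqOff = r.lastSeq + 1 - h.Seq
		r.tsOff = r.lastTS + 3000 - h.Timestamp // ~one 30 fps frame at 90 kHz
	}
	out := header{SSRC: r.outSSRC, Seq: h.Seq + r.seqOff, Timestamp: h.Timestamp + r.tsOff}
	r.lastSeq, r.lastTS = out.Seq, out.Timestamp
	return out
}

func main() {
	r := &rewriter{outSSRC: 0x11223344}
	// First capture session.
	fmt.Println(r.rewrite(header{SSRC: 1, Seq: 10, Timestamp: 1000}))
	// ffmpeg restarted: new SSRC, seq/timestamp start over,
	// but the output continues at seq 11 / timestamp 4000.
	fmt.Println(r.rewrite(header{SSRC: 2, Seq: 500, Timestamp: 77}))
}
```

The uint16/uint32 arithmetic wraps the way RTP counters do, so the offsets keep working across rollover. This is the same continuity trick the swap-tracks example uses, just without needing a renegotiation.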
You’re right that re-timestamping is the proper way to avoid renegotiation, and the swap-tracks example is exactly the direction to take. In our case, monitor/crop changes usually required restarting ffmpeg, which often reset more than just timestamps (SSRC, sequence continuity, SPS/PPS timing), so renegotiation became the brute-force fallback.
That said, I’m definitely going to try your recommendation and experiment with re-timestamping / track swapping to see how far we can get without renegotiation, especially on mobile browsers. Thanks as well for the Discord link, I’ll share the project there. Appreciate the concrete pointers.
Thanks! And wow, getting a kind word from a Pion maintainer means a lot. Your library made this whole thing possible. The datachannel API is incredibly clean to work with. Appreciate you and the team's work on it.
You have done such an amazing job on this, it's been such a joy watching you improve SCTP. Better bandwidth estimation has been on the big list of things everyone has been wishing could be better for 5+ years.
I can't wait to see where you're going/what you end up building :) I absolutely didn't have the skills you have at this point in my career.
-----
* What is the best way to get people interested/involved in Open Source/contributing? I am always looking to attract more people. What is Pion doing right and what could it be doing better?
* Doing this project, what was the most surprising technical thing you learned? I haven't gone deep on bandwidth estimation, but it's super cool to me that loss + latency are really just heuristics. I get why it isn't possible, but it's a shame middle boxes can't just tell software 'You can send this much' :)
Thank you!! It wouldn't have been possible without your enthusiasm and encouragement!
I think a lot of people are interested in making cool things but bridging the gap between looking at the project from the outside and actually doing something can be scary, especially for people who are contributing to open-source projects for the first time. For people who are extremely new, it helps a lot to either have direct guidance from someone with experience ("synchronous" due to timely feedback) and/or reading/video educational material ("async" due to something that can be created once and used multiple times after its creation). With regards to Pion, I was lucky to have both kinds: constant feedback from Jo, plenty of material to read via https://webrtcforthecurious.com/ , RFCs, the countless issues in SCTP; it felt a little bit like I stumbled into a lab with all the tools at my disposal and all I had to do was put the pieces together, come up with a plan, and go for it.
Also the https://github.com/pion/webrtc/wiki/Projects-Worth-Doing wiki page was very helpful, but it's also something we could improve on: having a better understanding of what we want to get done and, roughly, when we want it done is good. Updating it can be a bit of a chore though, so I wish there were a more efficient way to keep track of things while getting things done. I didn't struggle with it as much myself, but that was mostly because Jo and I were working together nonstop for the last few months; I can see how it could be confusing for newer folks. It would also be good to mark which issues/features are blocked by others, which I added in SCTP's readme and has been nice. One thing I wonder about is whether the older parts of the codebase are daunting, though our efforts to put together resources on which RFCs are relevant have been great. I wonder if the video streaming world could have some sort of public tracker for different tools/technologies for people to learn about. I feel like that might be the biggest bottleneck for us, especially compared to other specialized software fields.
As for technical things, there are definitely a couple: learning that Windows' networking doesn't play as nicely as Linux's has been a bit of a nightmare, I'm surprised it even works at all haha. Another is how little attention RACK has received; the dissertation (https://duepublico2.uni-due.de/servlets/MCRFileNodeServlet/d...) doesn't seem to have garnered much attention despite having a really big impact. I also think that it's really surprising to be able to sit down and deal with packets and remember that everything is just a bunch of packets and the way we keep track of things we've sent or received is entirely governed by whatever rules we come up with. It feels oddly primitive but very valuable to be able to learn about why and how we want to track whatever heuristics we come up with!
I worked on a project that started with VNC and had lots of problems: slow connect times and backpressure/latency. Switching to neko was a quick/easy win.
If you want something more lightweight... RustDesk has been great for me; it supports multiple adaptable video codecs and can optimize for latency vs image quality.
You can run all WebRTC traffic over a single port. It's a shame you spent so much time fighting ICE errors.
That’s great you got something better with less complexity! I do think people push ‘you need UDP and BWE’ a little too zealously. If you have a homogeneous set of clients, stuff like RTMP/WebSockets seems to serve people well.
This is something I have been working on for years and am sssooo excited to see merged. This starts a new generation of broadcasting (I hope)
* Cheaper servers. More competition and I want to see people running their own servers.
* Better video quality. Encoding from the source is going to be better than transcoding.
* No more bad servers. With E2E encryption via WebRTC you send video straight to your audience, and the server can't modify or surveil it.
* Better latency. No more time lost transcoding. I love low-latency streaming where people are connected to the community, not just blasting one-way video.
----
Please please test it out! I want to catch any bugs/make improvements before branch cut.
I don't understand how adding WebRTC/Whip enables the bullet points you listed. Is the idea that we'll transcode locally and because of that, we can set up a cheap host? How does this impact the level of hardware we'll need locally?