My understanding was that WebRTC wasn't designed exclusively for voice and video applications, but rather as a generic real-time, peer-to-peer communication framework that includes voice and video capabilities. Not trying to take away from your comment or anything, but maybe I misunderstand WebRTC.
I recently implemented a WebRTC multiplayer game in Godot, and yeah, it has very little to do with chat, and isn't really "off the shelf". It took a hell of a lot of configuration, plus running and maintaining a STUN/TURN server and a backend lobby/matchmaking system. When you're doing that kind of programming with it, it's no more a chat protocol than HTTP is; it just has some nifty tricks for getting around routers and making a direct connection where it can.
That said, it was better than NAT punch-through and/or UPnP, which are not very reliable these days.
Correct me if I'm wrong: STUN/TURN is the part that does the UDP hole punching. And in the event that fails, all the data is relayed through the TURN server, which makes it just like a client/server setup where the server becomes the bottleneck again.
STUN is a protocol to query the network about its topology. TURN is a protocol to route through a third party when P2P fails. Both of these are part of the ICE framework.
STUN is basically a way to learn your external IP address and port to share with others, so both peers can try to connect to one another through them. It's lightweight.
If that fails, then there's TURN which acts like a proxy between the peers.
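For the curious, the STUN side of that exchange is simple enough to sketch with nothing but the standard library. This is a toy illustration of what a STUN client does (RFC 5389 framing; retransmission, error handling, and the older non-XOR attribute are omitted):

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def build_binding_request() -> bytes:
    """STUN Binding Request: type 0x0001, zero-length body, random transaction ID."""
    return struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + os.urandom(12)

def parse_mapped_address(resp: bytes):
    """Find XOR-MAPPED-ADDRESS (0x0020) in a response and un-XOR it."""
    pos = 20  # skip the 20-byte STUN header
    while pos + 4 <= len(resp):
        attr_type, attr_len = struct.unpack_from("!HH", resp, pos)
        if attr_type == 0x0020:
            _reserved, family, xport = struct.unpack_from("!BBH", resp, pos + 4)
            port = xport ^ (MAGIC_COOKIE >> 16)          # port is XORed with top 16 bits
            xaddr = struct.unpack_from("!I", resp, pos + 8)[0]
            ip = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE))
            return ip, port
        pos += 4 + attr_len + (-attr_len % 4)            # attributes are 32-bit aligned
    return None
```

In the browser you never do this by hand, of course - you just pass a list of iceServers to RTCPeerConnection and ICE runs these queries for you.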
Yeah. It's specifically designed both for A/V and for generic datagrams. You have to choose one or the other when initializing a stream (but you can have multiple streams between clients).
tl;dr: using WebRTC just for realtime client<->server data sucks, but WebTransport[1] is coming soon to serve that exact use case with an easy API
WebRTC has data channels, which are currently the only way to achieve unreliable and unordered real-time communication (UDP-style) between the browser and other browsers or a server. This is pretty essential for any networked application where latency is critical, like voice and video and fast-paced multiplayer games.
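For comparison, this is the "UDP-style" of communication in question - trivial with ordinary sockets outside the browser, but unavailable to web pages except through WebRTC data channels. A minimal stdlib sketch over loopback:

```python
import socket

# Two UDP sockets on loopback: no handshake, no ordering, no retransmission.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))       # OS picks a free port
recv_sock.settimeout(2)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"frame 42: p1 holds RIGHT", addr)

# Each datagram arrives whole (or, on a real network, possibly not at all).
data, _ = recv_sock.recvfrom(1024)
send_sock.close()
recv_sock.close()
```

A WebRTC data channel configured as unreliable/unordered gives a web page roughly these semantics, at the cost of all the signaling/ICE setup described elsewhere in the thread.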
As other commenters have noted, it's a royal pain in the ass to set up WebRTC if all you want is UDP-style communication between a server and browser, since you need to wrangle half a dozen other protocols in the process.
However! A new API, WebTransport[1], is actively being developed that will offer a WebSockets-like (read: super simple to set up) API for UDP-style communication. I am extremely excited about it and its potential for real-time browser-based multiplayer games (which I'm working on).
What do you find hard about setting up a WebRTC server? I hear from users they are able to spin up a Pion DataChannel server in 5 mins (including installing Go)
WebTransport isn't going to be here soon, though; I would be cautious about investing in it. Stuff like congestion control is still a big unknown [0], and we don't have datagrams everywhere.
I just starred it; it does seem like it'd alleviate some of the pain. Another similar (more end-to-end) project I came across is geckos.io. This previous HN thread[1] discusses some of the (at least perceived) difficulty with using WebRTC as a "WebSockets but UDP" solution.
I'm not sure I follow the concern about congestion control. UDP doesn't have it either, presumably because if you're building an application that requires such latency sensitivity, you don't mind rolling your own congestion control algorithm that makes the most sense for your application's specific needs, right? Pretty much all FPS games do this, as far as I understand.
I just don't know what a 'ROM game command' is. Seems more likely it would be controller inputs to one person's emulator? Or perhaps everyone has an emulator and the inputs are simply shared.
It's probably really simple: if you normally load a ROM game and start a multiplayer session, you control all the other players' inputs. This simply goes one step further and lets those other inputs be controlled by other players.
Rollback networking is essentially event sourcing. Game states are immutable, and new game states are derived from adding inputs (events).
You keep the last dozen game states around in memory, and if you receive an input from the past, you rewind to the last game state prior, add it to your input stream, and fast forward to the present.
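A toy sketch of that loop, assuming a purely deterministic step function (the dict-based "game state" here is made up for illustration - a real emulator snapshots the whole VM):

```python
from copy import deepcopy

def step(state, inputs):
    """Pure, deterministic frame function: new state from old state + inputs."""
    new = deepcopy(state)
    for player, presses in inputs:
        new[player] = new.get(player, 0) + presses
    return new

class Rollback:
    def __init__(self, initial):
        self.snapshots = [deepcopy(initial)]  # snapshots[f] = state entering frame f
        self.input_log = []                   # input_log[f] = inputs applied on frame f

    def advance(self, local_inputs):
        """Run one frame with whatever inputs we know about so far."""
        self.input_log.append(list(local_inputs))
        self.snapshots.append(step(self.snapshots[-1], local_inputs))

    def receive(self, frame, late_input):
        """An input from the past arrived: rewind and fast-forward to the present."""
        self.input_log[frame].append(late_input)
        for f in range(frame, len(self.input_log)):
            self.snapshots[f + 1] = step(self.snapshots[f], self.input_log[f])

    @property
    def current(self):
        return self.snapshots[-1]
```

Because step is pure, replaying the same input log always reproduces the same state - which is the "event sourcing" property being described.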
It has the same base advantages and drawbacks as RTS networking - the core logic is written as though the game is single player, and complexity can be scaled arbitrarily without bloating bandwidth requirements.
But in addition, you get the benefit of zero input latency (play a multiplayer RTS game and send a unit around - they won't move for 200ms or so), and the drawback of an absolute clusterfuck time rewind debugging madness if any inadvertent mutation of your immutable data happens.
The reason you do rollback with something like this is it gives you zero latency, and you can retrofit it on to an emulator without changing any game code just by using memcpy() on the game state.
Source: I've developed about a dozen titles using rollback networking.
I find this hard to conceptualize/unite with the player's view of the game - so if an input arrives out of order, the engine can essentially just reapply the new, adjusted stream of events to correct itself? From a data-modelling perspective that seems fine.
However, in those situations, what does the player see in game? IIRC rollback was popularised in fighting games like Street Fighter - so does the player see one "universe", only for that branch to suddenly rewind and replay into an alternate universe where a tiny action did or didn't happen?
That's exactly what happens. If you are writing the game yourself, you can do interpolation to fix things up gradually.
You can also delay significant events such as death until the rollback threshold has been passed, so you don't run in to knife edge situations where, e.g., it looks like you died and your character starts to ragdoll but then you snap back when it turns out you killed the enemy instead.
The key to it not being too disruptive is keeping the maximum rollback threshold fairly low. If you add inputs and your ping is greater than the threshold, they get delayed to a later frame, and your inputs start to feel sluggish (the server would enforce the delay, but you'd also add it client side).
Thank you, these types of comments are why I frequent HN! Really insightful; the first time I came across rollback I had one of those loving-CS/SWE moments. So I'm grateful that you're so obliging to my curiosity!
Out of interest, are there any toy projects out there you can point to that explore the concepts here, for someone with no first-hand experience with game dev?
Hmm, I haven't come across any, although you can probably dive in and build a prototype system without too much trouble.
My recommendation would probably be to build it without netcode to start (two local clients connected over a virtual pipe), using a system where you can easily serialize the game state - C with memcpy(), JavaScript reading/writing JSON, Clojure or similar. I use C# with compile-time generated code to store data in slots - it's not fun.
While not rollback, the original AOE (Age of Empires) networking writeup is probably the best introduction to deterministic multiplayer I've come across. There's the GGPO framework that you can get off the shelf, but it's pretty heavyweight.
There are some real head scratching moments with debugging rollback, but in general for games that aren't too performance intensive it shines. I actually developed an entire strategy game prototype over a period of three weeks in single player before bothering to test it worked in multiplayer. It did first try. Four days later, it was live in public beta (starjack.io if you're interested, which peaked at around 400 concurrent players).
If I'm understanding this correctly, this offloads the copyright issue by making it the user's responsibility to upload a ROM?
No one really talks about it, but the first iPod had the same approach to letting people play their thousands of MP3 files (99% of which I imagine were obtained illegally). Apple was eventually able to create a legal marketplace for MP3s, but in the early days it was all Napster/LimeWire/Kazaa/etc.
Obviously PlayStation/Nintendo ROMs are not as prevalent, but it's an interesting thought?
A ton of movies were reissued on Blu-ray, re-scanned from film. DVD has garbage resolution and compression, while film is made to be projected onto a gigantic theater screen. Surely you don't believe movie theaters were effectively displaying 720x480 or 720x576 at that size?
Personally I find that even BD-rips made in SD resolution look better than DVD—if only for contrast enhancements made for reissues, and due to better compression formats.
Moreover, if the source was shot on 70mm film then people might still keep re-digitizing it in the 22nd century into fresh HD formats of the day.
For anyone watching that video who is interested in more classic racing in incredibly high fidelity, I recommend this collection of clips from the 1966 movie "Grand Prix". It's a proper old-school Hollywood movie.
They entered actual Formula 1 cars into a grand prix, strapped cameras to them, and recorded the whole thing. Other times they added new bodywork to a Formula 2 car and had the actors driving around the track - again with cameras strapped to the car. Having been recorded in 70mm, it's some of the best on-board Formula 1 footage until well into the 2010s and the advent of small 4K on-board cameras in modern races.
IIRC (I'm having trouble finding my source), M*A*S*H was shot on film but in a 4:3 aspect ratio. It's since been released in HD/Blu-ray by being re-digitized, but each scene has been cropped to 16:9.
There's going to be a dead spot of TV/movies that were shot in SD on video, which will be forever SD. But older things, shot on film, can be brought to HD, and newer things are shot either on film or in HD video.
I'm curious to see how quickly the things shot in HD video become outdated by the superior resolutions of 4K, 8K, and whatever comes after, and whether there's a plateau of consumer/home resolution.
Star Trek: TNG was shot on film with a roughly 16:9 frame, transferred to 4:3 video to be cut into a final 4:3 master. When they did the Blu-ray remaster, they had to match and rescan every shot from the source film, re-edit each episode, and recreate the special effects (where not produced on film).
There was some thought of scanning the material in 16:9, but the sets and crew only ever considered the 4:3 frame - in 16:9 there would often be crew or mics in the frame, or perhaps even missing sets. So they kept the show in 4:3.
Apparently for the later Trek shows (DS9, VOY) the film crews were more careful to respect the 16:9 frame, but those shows aren't as popular and may never get the expensive Blu-ray makeover.
TNG has such careful framing too, even if they didn't have crew in the shot I bet it'd look off in 16:9 anyways.
What would any of these shots look like in 16:9 [0][1][2]? They really liked filling the frame to the brim with TNG, and any side-areas would just look empty.
The quality is impressive, and I've seen this before with older footage which has been re-scanned in higher quality. Still, there's some characteristic about old footage which sort of tells me it was shot a long time ago.
I want to say it's that the footage seems darker, and you can obviously make a case for classic brand logos being a giveaway.
In the comments on YouTube, people discuss some other race-related aspects that reveal the age of the video. Such as lack of protective barriers, and people standing right next to the race track.
I don't know that "oblivious" is the word I'd use. People standing next to the racetrack are well aware of the risks involved. They aren't there because they don't perceive the risk; they're there because they think it's worth it.
Would you describe rock climbers as "oblivious to the risks involved", or just as people who are sometimes killed by their hobby?
For another recent historical example see: smoking.
Personally I'm waiting for when sitting at the desk all day long is found out to be Very Bad. But that's probably just a modern ‘proletariat vs capitalist’ issue—I guess factory assembly lines weren't and aren't too popular either.
I think TV shows tend to be much more of a mixed bag. The Sopranos (1999), The Wire (2002), and Firefly (2002) were all shows at the forefront of the current renaissance in dramatic television— they were all shot on film, but it's interesting that Firefly for example was natively 16:9 and cropped to 4:3 for its original run, whereas The Wire's "conversion" to widescreen has been a matter of some more controversy, eg: https://www.techhive.com/article/2854439/hbo-remastered-the-...
Another interesting aspect of Firefly is that the pure VFX scenes (eg spaceship exteriors) were rendered originally at 480p, so when you watch the Blu-ray version of the show, you can see a bit of an awkward transition where the shots of the actors are gloriously crisp, but the space scenes are clearly upscaled.
Friends was also shot on film and cropped to 4:3 for broadcast. Like Firefly, Friends also had unintentional things (crew, microphones, un-dressed sets, etc.) appear at the edges in 16:9. While most were probably edited out before the uncropped 1080p re-release, some unintended things remain, which I personally really like seeing.
Additionally, like Firefly's pure-VFX shots, some footage in Friends also appears in SD in the HD release (seemingly either because the original film stock was lost, or it wasn't originally shot on film), with the zoomed 4:3 SD vs 16:9 HD sometimes changing mid-scene - and that too I find a humbling reminder of the quality of yesteryear.
I've been noticing some of the same with The X-Files. There are seemingly random shots with lower quality, and any shot with heavy VFX looks original, but most of it now looks quite sharp.
I remember having a distinctive moment of clarity watching Seinfeld reruns in 16:9 HD. Did they have the foresight that it would be a classic making billions in syndication?
Seinfeld started airing in 1989, a decade before American HDTV, so they probably didn't foresee the aspect ratio of televisions switching from 4:3 to 16:9. 35mm film likely gave them more high-end options than the cameras and lenses meant for broadcast. It would also give them more wiggle room if they needed to re-frame a shot in post.
Personally I find most of the 16:9 crops of old shows in syndication quite jarring, especially because in any shot where the formerly extra width doesn't work the editor is forced to vertically crop the original. Here are some screenshots comparing 4:3 and 16:9 Seinfeld and ...eek
So I would ask if maybe the image was cropped for the 16:9 conversion.
Update—here's what Wikipedia says: “Unlike the version used for the DVD, Sony Pictures cropped the top and bottom parts of the frame, while restoring previously cropped images on the sides, from the 35mm film source, to use the entire 16:9 frame.”
35mm stills cameras normally use a 3:2 frame. Television shot on film typically used movie cameras that shot at 1.85:1 and then cropped to 4:3.
But different shows handled the aspect ratios differently in their widescreen re-releases. Seinfeld included slightly more horizontally than the original TV release but still cropped a significant amount from the top and bottom. That '70s Show, as one example, was also shot on 35mm film but included the whole original 4:3 frame plus extra on the sides.
Could it matter that movie cameras use the same film but run it vertically? I'd be more inclined to stretch frames horizontally if I were designing a cinema camera, because that means the film runs slower.
Maybe that's true for movies made post-digital but before 2005, but if you look at, say, the recent cut of Apocalypse Now, it looks unbelievably good. Like it was shot yesterday.
This is such a dumb law that I'd think a case of jury nullification would be within the realm of possibility. I had absolutely no idea this was supposed to be illegal.
That does seem like a somewhat rational response. It doesn't really make sense, though, because pirates would probably use free burner/throwaway accounts.
The thing is, it probably comes down to an architectural flaw on Deezer's part that they think would cost more to fix than the actual losses due to piracy.
But I just wonder if Deezer could run into legal trouble because of this... I can't imagine that the artists/labels would want their music available like that, but maybe Deezer is "too big to be held accountable" as well?
There are GitHub repos that show how to crack Spotify's DRM as well. I don't think it really matters, or that anyone cares. People pay for Spotify because it's super convenient in a way that pirated music isn't.
Hmm, they offer a selection of ROMs you can choose from in addition to uploading your own. There's JJBA, Medal of Honor, Doom, CTR, etc. that you can run without uploading anything.
It's not mainstream, but some of this is going on right now with emulation portables like the Odroid Go Advance and its various knockoffs. The devices are all made in China and some even ship with the SD card preloaded with ROMs, but generally speaking I think it's the same framework of "the device is neutral, choose for yourself what you want to do on it."
I found their NES version[0] just the other day and tried it with a friend, and there was a consistent 0.5-1s delay on button inputs. We both tried hosting, but had the same issue and couldn't play. I have to imagine that PS1 emulation would be worse?
But as far as UX goes, it was incredibly easy to get us both into a 2-player game, complete with chat, voice, and video feeds. Their buttons are mapped to keyboard inputs, so Windows users can get a gamepad working with a program like JoyToKey[1].
On the contrary, I don’t think the PSX emulation would be worse for input lag; that’s likely all delay from WebRTC, so it’s probably roughly equivalent.
For that reason it's pretty amazing how little input delay you see with Stadia, which I believe is also built on WebRTC - although with one peer (Google) always hosting the video and the other (the user) providing input, I wonder if there's some clever way to improve upon this...
One trick that’s not really accessible to the general public: Stadia does some input anticipation and so will actually press the button for you just before it thinks you will press it, thus making it seem like a low-latency button press.
> There have always been arguments showing that free will is an illusion, some based on hard physics, others based on pure logic. Most people agree these arguments are irrefutable, but no one ever really accepts the conclusion.
When I had a big LSD trip, not only did I understand the fact that we don't have free will, I experienced it. It felt as real as my body.
I don't think about free will anymore, I feel like I've already watched the movie.
I don't even know what free will is supposed to be. Either all our decisions are deterministic, or they're due to random quantum phenomena, at some level. Which of those is the good scenario?
Linking it back up to my original question ("what happens if you don't press the button?") - I suspect Google cannot actually know you weren't going to. The outcome is computable but intractable.
Free will is just a level of analysis that assumes the outcome is intractable, and doesn't bother trying to compute it.
Does this only emulate "local" same-screen multiplayer games, or can it also emulate the LAN-style dedicated-screen multiplayer games that supported the PS1's obscure Playstation Link Cable?[0]
Bushido Blade 1 used that cable. It allowed each of the two players their own screen in POV mode.
I used to work at a video rental store so I had access to two TVs, two PS1s, this cable, and the space to set it up. It's a lot of setup when you could just play the game normally on one screen. Still neat.
That aside, Bushido Blade to me is still the best fighting game. If they revamped the location damage detection, and redid the graphics, I would rebuy it in a heartbeat. Most people favour the sequel, but to me it lost a lot of the charm and simplicity the first one had.
One-hit kills really spice up fighting games for me. Positioning and timing are even more crucial than in other games.
The "juggling" and ten button combos really take the fun out of the genre for me. (nothing against the other games, just not my cup of tea)
I haven't found another like it. HMU with recommendations if I missed a game!
BB1 was the game I actually had in mind. Since you had a video rental store, you might've had a location actually suited for it, but for the rest of us kids, it was a tall order to find another friend who had:
- Another PSX
- Another copy of Bushido Blade
- A spare TV that they could bring over
- A room big enough to have two TVs and consoles set up within a few feet of each other so that the cable could reach.
After taking the time to set all that up, simply playing the game in POV mode felt like a disappointment.
But I agree, Bushido Blade 1 was the superior entry. BB2 fell into the common trap of "fix up the kinks and 'polish' the sequel, but lose what made the original feel special along the way". The other thing that most people don't know is that while each character can technically wield any of the weapons, everyone has a preferred weapon, and wielding it unlocks unique moves for that character. This wasn't mentioned in the manual, nor in any of the online FAQs back in the day, so unless you had shelled out for the Strategy Guide, that whole dimension of combat was unknown to most people.
The original developer behind Bushido Blade went on to make the "Kengo: Master of Bushido" series, which was something of a spiritual successor, but IMO they were all garbage. The closest anything ever got, in my mind, was the original Way of the Samurai for the PS2 (another game that got a bunch of sequels, all significantly worse than the original).
The way it works is that one participant runs the emulator locally in their browser, streams the video, and accepts inputs from the other room participants.
AFAIUI the legal gray area with PSX emulation (and many other consoles) wasn't limited to just the ROMs, but also the console BIOS. Most emulators would tell you to dump (or find) your own.
Is that still the case, or are there free/libre reimplementations of these consoles' BIOS images out there now?
Seems like they offload the responsibility of ROMs to the user (good) but no mention of the BIOS part.
This is very neat but gamepad mapping can't be configured and the browser default "standard" mapping doesn't work very well (at least with my Xbox one controller).
If you visit chrome://flags you can enable "Restrict gamepad access", which will disable the default controller mapping.
Click the wrench next to controller 1/2, click in each of the input boxes and press the button on the controller to map it. I had no issues setting up my XB1 controller.
PS1-era fighting games are considered timeless. And fighting games in general are definitely the genre that has eluded easy portability to multiplayer web. If OG Soul Edge and Bushido Blade become viable again via PSX Party, it would indeed be an epic milestone in preservation!
I'm not so sure about this, as most people in the know were hunting down Saturn versions and 4MB/Action Replay carts.
There were a few games where the PSX version was ideal, such as Rival Schools, Battle Arena Toshinden and the mentioned Soul Edge, but that was because the ZiNc/System11 arcade hardware that they ran on was literally derived from the Playstation hardware.
So how does this work? Does the "host" run the emulator and stream the video feed of it and accept inputs, or do both players run the emulator in lockstep or something? The first approach would introduce a delay that's likely to be unacceptable for games. The second one would work better in terms of delay but would require some very tricky state synchronization.
edit: the first approach would also give the host an unfair advantage because they'll have no delay at all.
> would require some very tricky state synchronization
Games for these older consoles — any generation while games were either still unikernels, or still ran on RTOSes — are already 100% deterministic in terms of what the game will do on a given frame, given a fixed history of per-frame inputs. (That might be surprising, but devs would strive to keep this property, as it makes reproducing bugs far easier.) So you don’t have to do much, other than execute the game faithfully and ship button-presses back and forth, to ensure synchronization.
Of course, shipping these button-presses around to achieve state-consensus synchronously would be slow — but there’s no need to do it synchronously. All modern emulators are built not in terms of a single mutable virtual-machine state, but rather in terms of a functional-persistent chain of VM states (think a HAMT.) This is what enables “rewind” support in emulators — and more recently, “run-ahead” latency reduction (basically a type of speculative execution of VM states.)
Thus, with any emulator constructed this way, it’s actually very easy to ship+receive+resolve network inputs asynchronously: i.e. to receive inputs “about” frame N while rendering frame N+M, and then to go back and re-compute the correct VM state for frames N..N+M, such that frame N+M+1 will inherit from the recomputed frame N+M. (Remember, you don’t need to run any of the IO-emulation logic for the recomputed frames, so they’re actually quite cheap to recompute!)
It’s actually oddly similar to what blockchain nodes do to “reorg” when they discover a fork — just done in real-time, between 16.7ms frames.
Interesting, so commands are sent to other players async? Doesn't that mean that some players will see things differently than others? If my friend A stabs a monster and my other friend B also stabs a monster, but the commands arrive in different orders, I would see A killing the monster, but others might see B killing it instead?
Edit: probably the order of commands is kept via the chat room server, no? But then what is it using WebRTC for? Perhaps the "host" is the source of truth for the order... meaning that command speed and delay all depend on the host's network bandwidth.
Input events don't need network linearization. Instead, every input event "happens" on an explicit frame. Each network-broadcast input-event message looks like "during frame N, player P {was/wasn't} holding down button B." Nodes can receive and apply such messages in any arbitrary order, and the result will be correct. It's the frame-counter in each message, not the order each message was received, that determines how history will be rewound and altered.
Keep in mind, every node is running the same deterministic simulation. As long as nodes achieve eventual consistency in their "input movie" somehow (a gossip protocol, say), then every node will end up on the same VM state. You don't need a leader to declare what the consensus state is; presuming trustworthy peers(!), there is a single "objectively-correct" consensus state that every node will converge upon once all nodes have all messages.
Every time an input-event message is received from the network, the receiving node just rewinds to frame N, and recomputes it (and all frames after it) with an input-movie where player P {is/isn't} holding button B. (And then, only if that explicit message wasn't already what was predicted during run-ahead.)
Note that this is also exactly what happens when the local node produces an input-event message — that's what run-ahead is!
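A minimal sketch of that order-independence, with a made-up "simulation" that just counts button presses. (For simplicity it naively replays from frame 0 every time; a real emulator rewinds only to frame N, as described above.)

```python
def simulate(input_movie, frames):
    """Deterministic sim: replay the whole input movie from frame 0.

    State here is just {player: total_presses}; a real node would be
    stepping an emulated machine instead.
    """
    state = {}
    for f in range(frames):
        for player, held in sorted(input_movie.get(f, set())):
            if held:
                state[player] = state.get(player, 0) + 1
    return state

class Node:
    def __init__(self, frames):
        self.frames = frames
        self.movie = {}  # frame -> set of (player, held) facts

    def on_message(self, frame, player, held):
        """'During frame N, player P {was/wasn't} holding the button.'

        Messages can arrive in any order; only the frame number matters.
        """
        self.movie.setdefault(frame, set()).add((player, held))

    def state(self):
        return simulate(self.movie, self.frames)
```

Two nodes that eventually receive the same set of frame-stamped messages, in whatever order, compute the identical state - no leader required.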
That's kind of fascinating. Would you suggest some reading on this? Question: if I send the other 3 players (via gossip, say) the event "in frame N, I clicked the B button that causes the monster to die", but my network lags, and in the meantime they process other events where someone else kills the monster, they see the monster dying from another player's event. After a few ms my event gets to them, and their state is rewound so that the monster dies because of me instead of the other player. In other words, this causes players to see the monster dying twice.
> In other words, this causes players to see the monster dying twice.
The frames that are replayed aren't visibly replayed. They're re-computed, but the visible effect is more of a "lurch" from one world-state to another one. In this case, the monster has been dead for a second or two already, but suddenly each player's score counter would jump around to show that it's actually player P who did it, and gained the points for it. Except for player P themselves, who experiences no change.
It's very similar in experiential effect to the effect when an inherently-networked client-server game does client-side prediction, and then the server sends the canonical state which disagrees from the client's predicted state, and the client has to lurch into the server's canonical state. People's avatars pop into different positions; in an FPS, you might suddenly be dead "out of nowhere"; in a racing game, you might suddenly be spiralling out of the way because your path turned out to collide with that of another car that wasn't originally there; etc.
Inherently-networked games sometimes have interpolation ("lerp"-ing) code to smooth out this lurch between client-side predicted positions and canonical server-side state — usually, if the only thing that's incorrect is positioning, the client will try to generate a few interpolated frames of movement that takes objects from their incorrectly-predicted positions to their server-prescribed positions; and then it'll play those frames in fast-forward, so that they've all been played out well before the server sends its next update.
But this only works when the server isn't communicating every single frame (otherwise there's no time to replay the lerp); and it basically only works for positioning changes — if someone does manage to cause a discrete alteration to the game-state (like changing who killed an enemy), that will still usually cause a sudden "lurch" into the new game-state in these engines. The interpolation-frame generator code just gives up.
(Though, for especially-important things like who won a match, inherently-networked games just do a hard consensus-sync of all players — usually during a screen transition — before actually showing a results screen. People get quite cross when their results lurch!)
And, of course, games that weren't inherently-networked, but which are instead being networked "on the emulation layer", have no logic for lerping, and cannot, unless lerping logic is hand-written by some kind soul for each emulated game's engine. Possible if you're hand-crafting an emulator designed to play a single game; quite unlikely if you're just building a general system emulator — especially if, as with most authors of general-system emulators, your goal is faithful reproduction of the behavior of the original system.
> Would you suggest some reading on this?
No idea if there is any reading on how emulators do this, tbh. (I'm self-taught on this subject, from working with open-source emulator codebases.) But any modern textbook on [inherently-]networked client-server game-engine development should talk about client-side run-ahead prediction and interpolation-frame generation.
I believe there's some book which deep-dives into the network architecture of Doom, or Quake, or Unreal Tournament, or one of those early networked games, and uses it to explain the history/invention of these client-side prediction features. I can't seem to Google it for the life of me, though. Can anyone here assist?
> the receiving node just rewinds to frame N, and recomputes it (and all frames after it)
What about the situation where frame N is a “long” time ago (for some definition of “long”)? There must be some sort of threshold after which the game state can no longer be changed, even if an input is received.
There usually is, though it's arbitrary — basically an implementation detail of the emulator which was tuned based on in-practice behavior observation. Input-event messages that haven't been integrated into the canonical state after a "long time" must be dropped/undone by those who did integrate them, including by the player who originally generated them. In effect, an implicit "anti-event" is generated saying the opposite of the original input-event message, and applied to history by all players who applied the original message.
How do players tell if their event managed to become part of the consensus state? Depends on the networking protocol. In a gossip protocol, messages get sent to peers individually, and ACKed by those peers. In an elected-leader client-server protocol, as long as the server peer ACKs the message as "in the queue", it's canonical. In an IRC-like spanning-tree message-relay system†... something complicated involving vector clocks containing message IDs.
† I don't think I've ever seen an emulator choose a hierarchical message-relay architecture. It makes good robustness+efficiency trade-offs, but it's just far too complex to work with compared to the alternatives. It's more the domain of MMOs, that need to be concerned with many nodes colocated in one site speaking to many nodes colocated at other sites.
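The rewind-and-replay with a staleness cutoff described above can be sketched roughly like this (Python; `step` is a made-up pure function standing in for one frame of emulation, and the names and threshold are hypothetical, not from any particular emulator):

```python
MAX_ROLLBACK = 30  # frames; the arbitrary, implementation-tuned "long time"

def apply_remote_input(history, inputs_by_frame, event_frame, event, step):
    """history: list of (frame_no, state); step: (state, inputs) -> state.

    Rewinds to the event's frame and recomputes everything after it, or
    drops the event entirely if it is older than the rollback window
    (the implicit "anti-event" case described above).
    """
    oldest = history[0][0]
    if event_frame < oldest:
        return False  # too stale: every peer must drop/undo this event
    inputs_by_frame.setdefault(event_frame, []).append(event)
    tip = history[-1][0]
    # Rewind: discard speculative states at and after the event's frame...
    while history[-1][0] > event_frame:
        history.pop()
    # ...then recompute frame N and all frames after it with the
    # corrected input history.
    frame, state = history[-1]
    while frame < tip:
        state = step(state, inputs_by_frame.get(frame, []))
        frame += 1
        history.append((frame, state))
    return True
```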
I doubt it's what's being done in this case, but an interesting take on this topic is "rollback netcode", which AFAIK was originally used for playing emulated arcade games over the internet.
The idea is that the host guesses the other player's inputs for a given frame; when the actual input arrives, the host emulator rolls its state back to that frame and re-emulates the frames after it if the guess was wrong. If it guessed right, the game continues as usual.
It's a good fit for emulation since "rolling back" in an emulator is usually easier than rolling back the state in a game that wasn't built with this type of netcode in mind, but it has also become a popular type of netcode for modern fighting games.
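A toy illustration of that guess-then-correct loop, assuming the common "the remote input won't change" prediction heuristic (all names here are hypothetical, and `step` stands in for one emulated frame):

```python
def predict(last_seen_input):
    """Common heuristic: assume the remote player's input repeats."""
    return last_seen_input

def on_remote_input(history, predictions, actuals, frame, actual, step):
    """Handle the real remote input for `frame` finally arriving.

    If the prediction we emulated with was right, nothing happens and the
    game continues as usual; otherwise we rewind to `frame` and
    re-emulate every later frame with the corrected input.
    """
    actuals[frame] = actual
    if predictions.get(frame) == actual:
        return "kept"              # guessed right, no visible correction
    tip = history[-1][0]
    while history[-1][0] > frame:  # roll back the mispredicted frames
        history.pop()
    f, state = history[-1]
    while f < tip:                 # re-emulate with actuals where known
        inp = actuals.get(f, predictions.get(f))
        state = step(state, inp)
        f += 1
        history.append((f, state))
    return "rolled back"
```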
Great idea, but the performance is very poor for me. The games skip frames all the time, almost like a slideshow. I've only tested it locally, by the way.
(Windows 10, Chrome, i7-7700, GTX 1050.)
What emulator did you use for this? (I would suggest DuckStation; it's far faster than any other and more compatible than most of them, though Mednafen may work better in a few games.)
Replying to myself, since I bet the author won't, because they're probably violating GPL.
For anyone else interested, it definitely looks like it's using MAME, which is unfortunate, since it's one of the slowest and least accurate PSX emulators out there.
In any case, I'm sure the author will soon release the source code of all the GPL parts, if any ;)
SNES party helped my partner and me get through the Covid quarantine apart from each other. I'm really happy to see efforts like this.
Hopefully the author open sources this so that the community can contribute things like a more polished UI and save state management system. Seems like the kind of project that makes sense to be open-sourced.
Also, aside from the obvious AV chat systems (e.g. Jitsi, Zoom, etc.), 'cloud-gaming' platforms such as Google Stadia and Shadow use it.
I've been monitoring and experimenting with WebRTC for a few years now, and one of the most exciting recent developments has been more or less full support in desktop and mobile browsers (even Safari, which dragged its heels for years).
There are a lot of use cases for WebRTC. Video calls are the biggest one right now. But anything you can imagine that needs low-latency (<200ms) video or audio is something you can potentially do with WebRTC in a browser, these days. There's a huge amount of interesting stuff going on.
WebRTC is fundamentally peer-to-peer. But scaling up to larger calls, and doing things like recording, generally requires routing the audio/video through media servers that selectively forward, process, or transcode the media streams.
<shameless-plug> I cofounded a company that makes it easy to get started with and scale WebRTC-based applications and features. [0] </shameless-plug> In addition to basic video calls, we're seeing rapid growth of online classes, games, fitness applications, live collaboration in productivity apps, e-commerce and customer support, and IoT and robotics streaming video.
WebRTC is great for sending data as fast as possible to someone else, but less so if you need to synchronise data between multiple parties.
Zoom doesn't use WebRTC (unless you count the browser fallback version, which nobody uses); they use their own video codec, which notably uses a lot of CPU but requires little bandwidth.
From my experience, Zoom is the only solution where I don't experience bandwidth problems all the time.
Jitsi is very, very impressive. But their whole platform is also bound up with a lot of other tech in the stack (jitsi-meet, jitsi-videobridge, jicofo, and Prosody (XMPP)).
You hint at this being a downside, but (speaking as a Prosody developer) this is one of the things that makes Jitsi extremely powerful and flexible.
For example, Jitsi delegates authentication of users to Prosody. Prosody already has a bunch of authentication backends ( https://modules.prosody.im/type_auth ) and Jitsi benefits from these without needing to reinvent the wheel.
I've seen the Jitsi+Prosody combo integrated into many kinds of platforms. People also implement things like custom access control, logging, notifications and provisioning at this layer. These things would generally be harder to customize in a monolithic off-the-shelf system.
While the docs you point to are quite easy to follow, most of the "going further" is really not well documented...
But we are indeed integrating Jitsi into a game experience here, and its open-source stack is really nice to deal with (after the many days needed to really understand it).
I'd say many-to-many is possible for more than 2 participants, but it does get complicated, being a mesh and all, so you might opt for an SFU to untangle that mess, though that has its own disadvantages too.
Not at the moment; he's moved on from it and is now working on a GBA version of a DDR-like game, but you can try approaching him. Check out his website: https://r-labs.io/
As someone working on real time streaming, I find this both interesting and impressive, but I also cannot shake the feeling that WebRTC will be too slow for most games.
At least in my experience, WebRTC even when using UDP was a lot slower than custom protocols, mostly because it responds badly to packet loss.
If I use plain UDP in C++, I can layer on forward error correction like Reed-Solomon and just ignore lost packets up to a percentage. That, along with low-level control of the transmission timings, seems to be the difference between 60ms peak latency with WebRTC and 5-8ms in C++. Or noticeable input lag @60fps vs. exactly one frame.
Yes, those protocols make a huge difference. Also, high-throughput, low-loss UDP relies on ms-level timing control, which is all but impossible in JavaScript.
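To make the forward-error-correction idea concrete, here's a toy Python sketch using a single XOR parity packet per group — the simplest possible erasure code, and only a stand-in for Reed-Solomon, which tolerates multiple losses per group. Equal-length packets are assumed, and all names are my own.

```python
def add_parity(packets):
    """Append one XOR-parity packet to a group of equal-length packets.

    Any single lost packet in the group can then be rebuilt by the
    receiver without waiting a retransmit round-trip.
    """
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def recover(group, lost_index):
    """Rebuild the single missing packet (None entry) from the survivors,
    since XOR-ing every surviving packet together yields the lost one."""
    rebuilt = bytes(len(next(p for p in group if p is not None)))
    for i, p in enumerate(group):
        if i != lost_index:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, p))
    return rebuilt
```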
This is pretty cool. I’m always looking for new .io games to play with friends online, and it’s nice to see more ‘old games’ being put online. Kind of reminds me of Pokémon Showdown.
Who is the developer of this? This looks like a great service for people who want to try out the old games they played when they were young. For example, I always have an urge to play the old Civ3, but it was always a hassle to start a Windows machine, open Steam, and launch the game.
Emulators are very impressive! The biggest innovation here is running the emulator in browser (presumably a WASM cross-compilation of an existing desktop emulator?).
Dolphin has had a similar (non-browser) feature called Netplay for many years, and it works incredibly well.
hauxir, nice job!
How is P2P implemented? I see the mediasoup-client SDK in use, but that does not support P2P.
How many participants are supported by the app? WebRTC requires re-encoding each video stream, so there is a CPU limit on how many peers one can talk to.
I noticed there is some kind of latency in ms next to each participant. How is that measured - time to the next peer or time to the presenter?
Use off-the-shelf chatroom software to create rooms, but instead of chatting you send ROM game commands to each other, allowing for multiplayer gaming!