Thing is, I've helped ship my share of games that use GCs, and they're not always a problem.
UI solutions often/usually use some kind of GCed language for scripting. But the scale and scope of what UIs are dealing with are small enough that we don't see 100ms GC spikes.
Our tools and editors lean on GCed languages a lot - Python, C#, Lua, you name it - and as long as their logic isn't spilling into the core per-frame, per-entity loops, the result is usually tolerable if not outright perfectly fine. We can afford enough RAM on our dev machines that some excess uncollected data isn't a problem.
And with the right devs you can ship a Unity title without GC-related hitching. https://docs.unity3d.com/Manual/UnderstandingAutomaticMemory... references GC pauses in the 5-7ms range for heap sizes in the 200KB-1MB range on the iPhone 3 [1]. That's monstrously expensive - a third of my frame budget in one go at 60fps, when I frequently chase spikes as small as 1ms because they make me miss vsync - but possibly manageable, especially if the game is more GPU-bound than CPU-bound. It certainly helps that Unity has decent tools for figuring out what your GC is doing, and that C# has value types that make it much easier to reduce GC pressure for bulk data.
[1] Okay, these numbers are clearly well out of date if we're talking about the iPhone 3, so take them with a giant grain of salt, but they still sound way more plausible than the <1ms for 18GB numbers I'm hearing elsewhere in the thread, based on more recent personal experience.
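The value-type trick isn't C#-specific; it translates to any runtime that pairs a GC with value semantics. A rough sketch of the layout difference in Go (the Particle type and sizes here are mine, purely for illustration): bulk data stored as a slice of structs is one contiguous allocation with no per-element pointers for the collector to trace, while a slice of pointers creates thousands of separate heap objects the GC has to chase on every scan.

```go
package main

import "fmt"

// Particle is a plain value type: no interior pointers, so a
// []Particle is one contiguous allocation with nothing for the
// collector to trace element by element.
type Particle struct {
	X, Y, Z    float32
	VX, VY, VZ float32
}

// makeFlat allocates n particles as a single block.
func makeFlat(n int) []Particle {
	return make([]Particle, n)
}

// makeBoxed allocates n separate heap objects; each one is a
// pointer the GC must trace on every scan, and likely a cache
// miss besides.
func makeBoxed(n int) []*Particle {
	ps := make([]*Particle, n)
	for i := range ps {
		ps[i] = &Particle{}
	}
	return ps
}

func main() {
	fmt.Println(len(makeFlat(10000)), len(makeBoxed(10000)))
}
```

Same element count either way; the difference is one traceable allocation versus ten thousand.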
Right, well-contained GC in non-critical threads is not a problem. Once it's scanning all of memory, your caches get polluted, and only whole-system measurements can tell you what the performance impact really is.
GC enthusiasts always lie about their performance impact, almost always unwittingly, because their measurements only tell them about time actually spent by the garbage collector, and not about its impact on the rest of the system. But their inability to measure actual impact should not inspire confidence in their numbers.
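That mismatch is easy to demonstrate in any GCed runtime; here is a minimal sketch in Go (the workload and iteration counts are arbitrary, not a real benchmark). It times a fixed per-frame workload while churning the heap, then compares the wall-clock total against the stop-the-world pause time the GC reports for the same interval:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// workload simulates per-frame work over "hot" data whose cache
// lines a background GC scan can evict.
func workload(data []float64) float64 {
	var s float64
	for _, v := range data {
		s += v
	}
	return s
}

// measure runs some frames while churning the heap, and returns
// total wall-clock time alongside the GC's own stop-the-world
// accounting for the same interval.
func measure(frames int) (wall, reportedPauses time.Duration) {
	data := make([]float64, 1<<20) // ~8 MB of hot frame data
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)

	garbage := make([][]byte, 0, 256)
	start := time.Now()
	for i := 0; i < frames; i++ {
		// Allocation churn gives the collector real work to do.
		garbage = append(garbage, make([]byte, 1<<16))
		if len(garbage) > 64 {
			garbage = garbage[len(garbage)-64:]
		}
		workload(data)
	}
	wall = time.Since(start)

	runtime.ReadMemStats(&after)
	reportedPauses = time.Duration(after.PauseTotalNs - before.PauseTotalNs)
	return wall, reportedPauses
}

func main() {
	wall, pauses := measure(500)
	// Reported pauses are only what the GC admits to. The rest of
	// the wall time mixes real work with the costs its accounting
	// never shows: cache refills, write barriers, background marking.
	fmt.Printf("wall: %v  reported STW pauses: %v\n", wall, pauses)
}
```

The GC's self-reported number is only the stop-the-world slice; the mutator slowdown hiding in the remainder is exactly the part you can only see by measuring the whole system.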