> Abstractions like RAII, constructors and destructors, polymorphism, and exceptions were invented with the intention of solving problems that game programmers don’t have
This is the only part of the Jai philosophy I struggle to understand. How is an explicit delete keyword better than deterministic destruction? From my perspective, the former method reintroduces the problem the latter method solves.
Also, polymorphism is bad for games now? I absolutely cringe when I see abstract base classes, virtual functions, and classical inheritance outside of a generic interface in 2016. However, games might be the one case where I would strongly consider using those features as part of my design. For example, a Final Fantasy-style job system seems to lend itself elegantly to the "Intro to OOP with C++98" style of runtime polymorphism.*
As for exceptions, well, I think the reputation of exceptions is one of the great tragedies of C++. Rust, Go, and other new languages seem to have decided that regressing to pervasive error-checking boilerplate is more elegant than having exception handlers.
I think Swift got it right with scoped exceptions and a defer statement - although they selectively use "error handling" as doublespeak to avoid saying the e-word.
*Although, I personally think interface/protocol-based solutions have proven to be the cleanest way to tackle polymorphism -- this is possibly true even in the case of an RPG with class inheritance as a game mechanic.
>> This is the only part of the Jai philosophy I struggle to understand. How is an explicit delete keyword better than deterministic destruction? From my perspective, the former method reintroduces the problem the latter method solves.
My interpretation of statements like "problems game programmers don't have" is that they don't mean game programmers never run into situations where things like RAII or polymorphism would be useful, just that the stereotypical game programmer doesn't care about using them because they have their own ways to get the same results, which often involve programming practices that aren't considered safe in most other application domains.
The thing with 'game programmers' when referring to people like Jonathan Blow and e.g. Casey Muratori (who does Handmade Hero), is that they have been writing the same kinds of systems for so long, and have developed so much 'muscle memory' to avoid the pitfalls of their coding style, that they appear to have developed a blind spot for all the deficiencies in the code they write. It works, it's efficient, and someone with the same mindset could make progress on it despite the minefields they've created, but it's usually not 'good code'. The Handmade Hero code, for example, is atrocious if you ask me.
It never ceases to amaze me how people who are so smart, much smarter than myself, fail to acknowledge all the ways in which they could write better code without throwing out any of the goals they've set for themselves (performance, compilation speed, predictability, ...). All things considered, it does not surprise me that so many games ship in a half-broken state.
Listing all the things bad about it would take me the rest of the afternoon, and likely evoke many replies along the lines of "you don't understand why he does it that way, it's supposed to be handmade, so you can't have all the nice things", like last time I commented about the HH code.
The executive summary would be that the code is basically full of unsafe code, pointer chasing, half-hearted argument/input validity checking, no memory management, and no abstractions of the higher-level parts of the code. It uses none of the idioms we've learned through years of writing crap code to avoid common mistakes (RAII being a good example), and it doesn't use 99% of the language features C++ offers over C. Casey himself dismisses these things as adding unnecessary bloat and/or mental overhead, saying that they have no benefits for his purpose and that the way he writes code you don't need them, but I just flat-out disagree with that.
Let me make clear that I'm not saying this because I don't like HH or because there's nothing to be learned from it, just that the results you get from this programming style should not be taken as an example. The quality of the code is immediately apparent if you watch Casey work on it in the streams, almost every line he changes breaks something somewhere else in the code, often even multiple things.
Yeah from what I've seen I've noticed he jumps around a lot putting out fires when he changes anything. Like I said in another reply, he's too far in the camp of C++ being C with classes.
I think the sad thing is that people are learning C/C++ from him and so will try to program anything in these languages like this.
It's gross because it's written in "C with namespaces and overloading"
Preferring #define for constants over constexpr is 100% the result of C++ bigotry.
It's a laughable decision. constexpr gives you the power of typesafe, compile-time evaluation of purely functional expressions with type deduction. #define gives you a 1972-style compile-time copy-paste.
I agree with things like these, I think Casey and other programmers like him are too far in the camp of C++ is just C with classes.
From what I have seen of the code and videos, I think his general structure of programs is off. I'm no advocate of "every function has to be at max N lines" or "every file has to be smaller than N lines". But I think there are issues there, and I don't think it can be defended just because it's a game, where a lot of general practices go out the window.
There are cases where virtual functions lend a lot of flexibility, but for games they are usually a sign of a programmer who doesn't understand how pointer chasing destroys performance. If you look at Ogre (and I think box2d), both architectures could be massively sped up by not using arrays of pointers.
Games (I am talking about AAA here; I have no idea about all possible games) are not written in a style that leaves a place for constructors/destructors. The data is better kept in tables (i.e. arrays) than spread all over memory and accessed through a pointer network. The reasons being:
a) walking an array is orders of magnitude faster than walking a pointer chain, and b) heap allocation wastes more memory.
This, in turn, leaves no place for polymorphism: since you already have uniform data in the arrays, there is no need to do an expensive indirect jump to figure out its type.
If you're using C++ you're still using default destructors even if you don't write them. In C, you at least get deterministic destruction for variables and structures on the stack. This isn't something you really opt out of. It's there by default.
The parent was wrong about vector, though: it won't safely free elements that do not have their own destructors to clean up their own resources. Another good reason to use RAII everywhere. It costs nothing to encapsulate memory management.
> You could, but why?
Because there's zero overhead and you guarantee to be passively covered in the few edge cases where you do end up needing deallocation. Now you don't have to worry about special handling of edge cases and you have a more general purpose data structure.
A better question is, why not?
It's simply a better designed and more flexible container than a raw array. Other than a bias against C++isms, there's no good reason to avoid these useful features.
Default destructors are fine since they don't produce any code as long as you don't have any real destruction going on.
>There's zero overhead and you guarantee to be covered in the edge cases where you do end up needing deallocation.
If you suddenly find yourself in a position where you need deallocation for something that is not supposed to be deallocated, then I'd rather have it fail with as much noise as possible than have it covered. E.g. I prefer the game crashing on out-of-memory sooner rather than limping along thrashing the heap until it crashes 8 hours into the soak test due to heap fragmentation.
>It's simply a better designed and more flexible container than a raw array.
Tastes differ. I ship games myself and almost all programmers I know do the same. I don't know anybody who would agree with this. Just to be clear, I am talking about destructor of an array. Wrapping arrays in structures is fine and everybody does this.
> I'd rather have it fail with as much noise as possible than have it covered...
RAII is orthogonal to contiguous storage in memory. You are not opting into heap fragmentation by moving your "dealloc struct" function from the global namespace to a destructor. It has nothing to do with the memory layout. It has to do with preventing memory leaks and undefined behavior.
> If you suddenly find yourself in a position where you need deallocation for something that is not supposed to be deallocated, then I'd rather have it fail with as much noise as possible than have it covered. E.g. I prefer the game crashing on out-of-memory sooner rather than limping along thrashing the heap until it crashes 8 hours into the soak test due to heap fragmentation.
How would this be different?
> I don't know anybody who would agree with this.
It isn't really something to agree on, in one scenario you have options for automation but don't give up anything, in the other scenario you have no ability to use ownership or scope semantics whether you want to or not.
In one case it takes 1-15 minutes to reproduce, in other - 8 hours.
>It isn't really something to agree on, in one scenario you have options for automation but don't give up anything, in the other scenario you have no ability to use ownership or scope semantics whether you want to or not.
Judging by your previous question I figure you don't ship games, do you?
std::array<int,ARR_SIZE> is better than int arr[ARR_SIZE]
I don't know how you could disagree with that after looking at the facts. From what you're saying here, there seems to be a culture that favors "old school" C programming in games, but the reasons behind it seem like nothing more than a fear of the unknown. I don't mean to be disrespectful, it just seems like nothing but stubbornness to me.
> Just to be clear, I am talking about destructor of an array
>std::array<int,ARR_SIZE> is better than int arr[ARR_SIZE]
I don't know how you could disagree with that...
What is the alignment of your std::array? What is the memory type (e.g. can the GPU read from it at all? Can it write? What are cache policies?). The alternative though is not a C array, it's an explicit memory mapping.
>Can you be clear about why this is bad?
Useless code at best (if your game runs properly it will never be deallocated by your code), obscuring bugs at worst (if it starts deallocating at runtime it will take longer to fail).
> What is the alignment of your std::array? What is the memory type (e.g. can the GPU read from it at all? Can it write? What are cache policies?). The alternative though is not a C array, it's an explicit memory mapping.
Guaranteed to be contiguous, and semantically equivalent to a C array in all cases.
If you don't trust your vendor's STL implementation, take a look at the intrusive containers in EA's STL implementation. It's very, very good for games. It's also safe, which is a good thing that doesn't obscure bugs at all.
> e.g. can the GPU read from it at all? Can it write? What are cache policies?
Fun fact, you can write your own template container with specific features with no additional overhead from a C "array" that's also memory safe.
> if it starts deallocating at runtime it will take longer to fail).
It can't magically deallocate at runtime. It's deterministic. I don't think you understand: you give up zero control. It's just a cleaner system with less room for human error.
> obscuring bugs
What's obscure about knowing exactly where all memory management occurs without exception? C-style malloc and free scattered all over the project is way more prone to hiding bugs.
As I said, the alternative is not a C array.
Neither are C-style malloc and free used in games. If you want a discussion, argue over what I've said or ask questions if you don't understand something. Otherwise have fun with your own mental image of game programming yourself.
You asked a question about the array and I answered it.
Yeah I missed the part where you said mmap. There's libraries that make that safer, but clearly there's a preference for working with the raw tools here.
> Otherwise have fun with your own mental image of game programming yourself.
Ignorance is bliss. Have fun ignoring the progress systems programming has made in the last 30 years. Why bother even looking into it right? If what works for you works... that's all that matters.
Disregarding the fact that std::array is allocated on the stack, you do realize that running in a debug mode means there can be bounds-checking assertions built into containers like this, not to mention iteration over the elements instead of iteration over the indices (which guarantees not going out of bounds)?
std::array is not necessarily allocated on the stack; it's only on the stack when it's a local variable. So I realize what it is and what it does. Do you realize that you have little control over where its memory goes and that there are very different types of memory available to games? Do you know what memory alignment is? Do you realize you cannot grow/shrink it? Do you realize you can still do bounds checks if you need them?
If you need to grow or shrink it use a vector. If you need different types of memory or aligned memory, make an allocator and use that. Many people do, it is a very common use of allocators. Even if you don't want to use the STL you can encapsulate all of these things for reuse and modularity.
I'm not exactly sure why you think these things aren't achievable in C++ (and because they are achievable they are relatively straightforward to wrap in a way that they can be made generic while hiding the complexity so you can be done with it). I've even made variadic templates that fuse memory allocations together like Jai's proposed feature.
I've seen people who know C and seem forever hung up on it. It isn't really rational these days now that there are C++11 compilers that are so mature. It's almost as if there are people who work in a constantly advancing field but don't want to learn anything new.
Thank you for the advice, but what if I want to shrink one array that takes 200 MB of memory by 10 MB and give it to another array that takes 50 MB of memory on a system with only 256 MB of memory?
> If you need different types of memory or aligned memory, make an allocator and use that.
std::array does not have allocators.
>I'm not exactly sure why you think these things aren't achievable in C++
I have no idea why you think so. I only used C++, assembly and various shader languages in every game I worked on. I did some C in drivers but I don't think it's a good language for games.
I've lost track of your point all together, are you still trying to say there is utility in raw arrays?
You can't possibly think memory allocation problems that are solved by custom allocators can be dismissed because std::array doesn't take an allocator, when that is integral to the entire reason it exists. Are you trying to say that not only do you want aligned static memory but that there is nothing that exists that helps over a raw C array?
Here is a digest of the thread you had been replying to: gregstula said that std::array is better than what games use (imagining games use plain C arrays). I corrected them, saying that games use explicitly mapped memory, and pointed out issues with std::array that prevent its usage in games. I never advocated the use of raw arrays in this thread, though they have utility since they are easily substituted with pointers if/when you decide that you care about the underlying memory, and this is why I've never seen std::array in game code. Note that I have not seen every game in existence, only a few dozen AAA titles or so. I know people write indie/mobile/social games with stl, python and what-not, but I am not really interested in that kind of game programming.
Hey there. I wrote the Jai Primer. Ideas there are my best interpretation of Blow's ideas, which I generally agree with, but bear in mind they're not his.
To quote Joe Armstrong: You wanted a banana but you got the gorilla holding the banana and the whole jungle. You wanted a way to delete memory automatically, which sounds great, but in practice most languages' approaches to solving this problem come with a host of other problems that at scale make the given solution not worth it. RAII solves a big problem but it introduces a bunch of tiny problems, like big mysterious constructors that implicitly do a lot of work and destructors that don't map to any particular line of code other than an ending brace. It's tough to examine in a debugger, it's tougher to reason about when it gets to nontrivial scale, etc etc. But human brains naturally weigh a lot of small pains to be less bad than one big pain, so the solution looks legit.
Polymorphism is a way of modeling the world that runs along the lines of the categorizations that people tend to make, so it feels natural to create deep class hierarchies. But in practice it doesn't match the problem that you need to solve when you make games - you have data in state A and you need to get it into state B. Example: to do a physics integration on each simulated body, the CPU wants to do a for loop over a list of position vectors. But those vectors have been scattered all over memory by the class hierarchy. So that's one problem, you also get problems like needing RTTI and casting and yadeya. Class structures have largely fallen out of vogue in game development in favor of component systems, which is a pretty good step forward.
This is interesting. Recently, I've been doing more C programming again, and every time I do I find myself thinking about building a language that is essentially C with just a very few pain points removed. #1 is a build/module/package system for making it easier to build modular code without the copy/paste that is the de facto C standard. #2 would be removing the pain points of utf8/unicode support (Go seems to get this pretty close to right). And that's really it. (You could throw in an easy way to do closures and all values immutable by default and I wouldn't mind either.)
Otherwise, I'm usually completely happy using C. I've tried Rust and Go and they have their use cases, but for most low-level things I do, I don't care that much about everything being perfectly safe (i.e. rust), and I don't want gc, and the high level stuff that Go brings. Really, just give me C with a good module system and I'd be perfectly happy. Maybe I'm alone here though...
One language to look at for reference that most people haven't heard of is Clay. It isn't kept up any more, but its aim was to be a modern generic C. It made use of very clean generic programming with move semantics and no garbage collection. The person who wrote it was using it as a substitute for C at his work already.
I remember Clay, it was mentioned on Reddit for the first time around the same time I first heard about Rust (back when Rust was a very different beast). I thought Clay was very interesting; it's a shame it is more or less abandoned.
There are a lot of languages which are "dead" but are useful to look at when you're designing a new one. Clay is certainly one of those if you're looking at building a low-level language.
Use of GC in Nim is up to you -- you can malloc/free as you please if you want and avoid it altogether. I'm not sure to what extent you can/can't make use of the standard library if you avoid GC, but even if you lessen your restrictions and use it a little bit, it's a highly tuneable/controllable and understandable GC including support for swapping out entirely different GC algorithms.
Professional game development as in AAA studios? Yeah, probably not, but more for the tooling alone than whatever language any particular studio happens to use. Professional indie game development? It's ready, even though it's not at 1.0 yet. Indie devs deal with big breaking upgrades to their frameworks (Unity, Corona, etc.) all the time, it's annoying but not a deal-breaker if the language gets those sometimes too.
I look at all these new languages with horrifically complicated syntax and wish for s-expressions. Lisp is perfectly well suited for game development, too, and not just for scripting. There are many implementations around with fast optimizing compilers, JIT compilers, and other modern features that make things run fast. When I see languages like what Jonathan Blow made, I think that most of the features can be implemented as extensions to your Lisp of choice.
The one thing that keeps me away from Lisp/Scheme is the lack of built-in syntax for hashmaps and sets (I like Clojure's syntax, but don't want the JVM).
I've never gotten the hang of car/cdr and dotted pairs.
I don't think this is a particularly large problem. Traditional hash tables are imperative data structures, and Lisp code (or Scheme code, at least) typically does not use them because of this. Association lists, which can be represented as literals, are persistent and provide faster lookup, despite being O(n), for the cases in which hash literals are typically used (small number of pairs). The Clojure language has built-in persistent hash tables and sets, so it makes sense for it to have a reader that can process them. I really don't think this is a deal breaker though, when you can just do stuff like this: {snip}
> Traditional hash tables are imperative data structures, and Lisp code (or Scheme code, at least) typically does not use them because of this.
I don't understand that.
> Association lists, which can be represented as literals, are persistent and provide faster lookup, despite being {snip}
Those look like they could be useful, but I don't see how they can replace hashmaps --- for one thing, they allow duplicate "keys" (the first element of each pair).
I'd use a lisp, and don't mind the lisp/parens syntax, but to be useful for me it must provide easy access to and use of hashmaps and sets.
Introducing state into programs makes them harder to reason about, thus Scheme programmers generally discourage the use of mutable data structures when a persistent data structure would have worked.
>Those look like they could be useful, but I don't see how they can replace hashmaps --- for one thing, they allow duplicate "keys" (the first element of each pair).
The way alists are used, only the first pair to contain the desired key is considered. Thus, you can "overwrite" a key/value pair by consing a new pair onto the head of the list.
>I'd use a lisp, and don't mind the lisp/parens syntax, but to be useful for me it must provide easy access to and use of hashmaps and sets.
I had the same initial complaints about the lack of reader syntax for hash table. It's a common complaint, actually. However, I found that as I learned more about how to write Scheme, I stopped using hash tables in any place where I used to want literal syntax for them. I learned that people reach for mutable hash tables far too frequently when there are better options available.
What languages even have literal syntax for sets? I can't think of any, but I'd like to know.
Reader syntax varies in each language, but I hope you can see that this really isn't a big problem at all.
> > > Traditional hash tables are imperative data structures, and Lisp code (or Scheme code, at least) typically does not use them because of this. Association lists, which can be represented as literals, are persistent and {snip}
> > I don't understand that.
> Introducing state into programs makes them harder to reason about, thus Scheme programmers generally discourage the use of mutable data structures when a persistent data structure would have worked.
Oh, maybe you typoed and meant to say that traditional hashtables are mutable data structures? In that case, I see what you mean; in Clojure, hashmaps, vectors, sets, and lists are all immutable and persistent. In Scheme, which data structures are persistent, or mutable/immutable?
> The way alists are used, only the first pair to contain the desired key is considered.
Ok, I see now. The function creates a hashmap from an alist.
> I learned that people reach for mutable hash tables far too frequently when there are better options available.
{raises hand} They're very easy to work with. In Clojure, I found myself doing extra work to work around the immutability. I'm sure it's a benefit for larger and multithreaded programs, but mine were neither.
> What languages even have literal syntax for sets? I can't think of any, but I'd like to know.
Clojure and Python. I suppose I could live without literal set syntax, but hashmap/hashtable syntax is extremely handy.
Clojure has syntax for sets. I really like Clojure's data literals. I even think there's a Common Lisp package (or two) out there implementing reader macros to allow Clojure's syntax, but I don't think it's popular among Lispers.
When we look at Lisp we see archaic user interfaces, legacy keywords such as car, cdr and cons which bear no meaning to us mere mortals. Also no clear consensus on what extensions to use. Why are there so many dialects? Can you not agree on something that works? Where is your IDE with error underlining and autocomplete list that comes up with each keystroke? And finally s-expressions, which make you twist your mind in order to write and don't give clear structure of the code meaning to the reader, and not even considering macros...
Not trying to start an argument, but that's my view from a C#'er who tried Lisp a while ago. I often see Lisp talked about on HN as though it's the solution to all our problems, but it's really not. Syntax is a big deal in programming languages, with lots of trade offs between human readability and unambiguous parsing by the computer, and s-exps aren't some magic bullet for this.
Well I think Lisp's biggest strength is also its biggest weakness: macros.
By being able to bend a Lisp to your will, you trade away your ability to standardize the language and build a community of libraries and tools around it. That being said, there's nothing like having a language which eventually becomes the best tool for the problem at hand (as your Lisp will tend to evolve appropriately).
Since macros are written in a standard language and many can be written in a portable way, it's just another way of doing metaprogramming.
Stuff like DEFCLASS, DEFMETHOD, ITERATE, WITH-GENSYMS, etc. started as portable libraries.
What one has to learn: the language does not have a fixed amount of syntax. That's the price to pay: learning a new level of programming, linguistic metaprogramming.
>When we look at Lisp we see archaic user interfaces, legacy keywords such as car, cdr and cons which bear no meaning to us mere mortals. Also no clear consensus on what extensions to use.
Car, cdr, and cons take very little time to understand, but yes their meaning is steeped in history. "car" returns the first element of a pair, "cdr" (pronounced like "could-er") returns the second element of a pair, and "cons" creates pairs. Not that bad. However, today's Lisps put more emphasis on using pattern matchers than manually car/cdring down lists.
>Why are there so many dialects?
That's like asking "Why are there so many ALGOL derivatives?" Lisp classifies a family of languages, not a single language. It's like saying something has C-like syntax.
>Can you not agree on something that works?
Given the above misunderstanding, this question is no longer relevant.
>Where is your IDE with error underlining and autocomplete list that comes up with each keystroke?
A lot of Lisp hackers use Emacs with a few extensions. I am a Guile Scheme user, so I can only speak for my setup: Emacs, Paredit, and Geiser. Paredit provides efficient and powerful structured editing support for s-expressions. Geiser provides REPL integration. Geiser can autocomplete symbols, jump to the definition points of variables, display documentation for a procedure or macro, show the values of variables under the cursor in the modeline, show function signatures in the modeline, and allow instant evaluation of arbitrary expressions with a simple keystroke (including jumping to the debugger when things go wrong), and probably some other things that I'm forgetting. It is the nicest development environment I have ever used. Common Lisp users also like Emacs and Paredit, but they typically use SLIME for REPL integration.
> And finally s-expressions, which make you twist your mind in order to write and don't give clear structure of the code meaning to the reader, and not even considering macros...
We'll have to disagree here. S-expressions are very nice, once you get used to them, which can be a bit difficult if your background is using C-like languages with infix notation. They remove complexity, make the language more regular (the operator is always the first element of a list, things that are operators in some languages like +, -, etc. are normal procedures that can be used as values), and allow for syntactic abstraction. Not only can you quote arbitrary expressions like '(+ 1 1) and manipulate symbols directly, the macro systems available in various Lisps allow for the creation of new syntax which is one of the most useful features a programming language could have. One of Paul Graham's essays talks about "top down, bottom up" design where the "bottom up" part involves defining new syntax when patterns emerge in your problem domain that you'd like to express in a more readable and less redundant way. Lisp allows you to build the language suitable for your problem. You seem to imply that macros are bad, but I think that languages that do not offer a macro system are fundamentally limited in the problems they are suited for solving.
I by no means claim that s-expressions are a magic bullet, but I think you are biased in saying that they are inherently less readable than more complex syntaxes. There's a lot of mystique and misunderstanding around Lisp, and I hope I have cleared up a thing or two.
> S-expressions [...] allow for syntactic abstraction
I never bought this argument, based on my intuition, but now Julia has proved this argument to be invalid. Insisting only S-expressions allow macros is simply intellectually dishonest. Go check how Julia does metaprogramming; all the power of Lisp with none of the weirdness.
You are claiming I made an assertion that I did not make. S-expressions allow for easy syntactic abstraction, as do other homoiconic syntaxes. I did not claim that it was the only possible thing that allows syntactic abstraction. I am aware of Julia's syntax, and I still much prefer the regularity of s-expressions. What you call weirdness, I call a feature.
Garbage collection and a language that doesn't run as fast as performance-minded C++ are not going to be used in hardcore game programming. You don't have control over the memory allocation or layout. For non-native game programming there are many, many choices.
This shows a bias: the assumption that high-level languages do not have native-code compilers that can generate faster code than what someone writes in C/C++. This is not true. Some of these compilers can produce better native code because the language lets you write better code in the first place.
I wasn't talking about all high level languages, just LISP. My experience is that people who like a particular language try to rationalize and convince others that there is no downside.
I would love to see an example of LISP being as fast as C++ with multi-threading and cache coherency taken into account, using the same amount of memory, with no pauses from the gc that would affect interactivity. If it hasn't happened in the last half century, I don't think it's going to happen at all.
There is a price to pay for a better programming language. C++ is fast and crashes fast. Its data structures are at the same time complex and inflexible. Many Lisp dialects are tuned for flexibility and some add a bit of performance to it. C++ comes from a different angle. It provides more performance, but it is fully inflexible.
Still people can do interesting stuff. For example the Crash Bandicoot games for the early Playstation were written in a low-level Lisp dialect with an external compiler implemented in Common Lisp. It ran on a tiny Playstation with cute graphics and sold very well.
> My experience is that people who like a particular language try to rationalize and convince others that there is no downside.
There are always trade-offs. GC is a win because manual memory management is terrible and error prone, but a loss because you have to learn how to tune it to behave the way you need. But no matter the language some things stay the same: learn to anticipate what code your compiler is going to generate, check what the optimizer is doing, check the disassembly, use the profiler. Good GCs and compilers play all the same games with cache locality and such, and some are better than others. I think you are underestimating the advances in compiler design. Maybe you think C/C++ are fine languages, and that's OK, but personally they are the absolute last resort for when I reach the limitations of my language's compiler and runtime.
> manual memory management is terrible and error prone,
The choice isn't between C style malloc and GC, it is a matter of how a language handles ownership. Look at move semantics or Rust's ownership. These are deterministic and controllable methods of memory allocation that aren't terrible or error prone.
> I think you are underestimating the advances in compiler design.
I would love to see an example like I said before, but you aren't giving any hard information to back up what you are saying.
It's better than a REPL: it allows you to write a game completely live. I'm not sure what you think is lost, since you didn't back up what you are saying with anything.
It is strictly less powerful than a REPL. Claiming that you can't live code a game at a REPL is false, because I do it on a regular basis. When I write games in Scheme, I use a REPL to write the game live, doing all of my editing within Emacs. I do not have to save files for the code to be evaluated, I just press a keystroke to evaluate the specific form I'm interested in. I can evaluate more fine-grained sections of code, not only entire files.
Every now and again, when I'm really figuratively toked up, I go reread the history of Crash Bandicoot and dream of more Lisp in the games industry (and every other industry). :) But I think Nim is slightly more realistic.
I like the idea of a language syntax that allows easier refactoring. I think Rust is especially good at this as well, and I'm happy to see that Jai borrows from that.
I liked some things I saw which I've only seen in Pascal / Ada.
Edit: Specifically, I thought it was the same as what I've seen in Pascal for arrays[0], where you specify the range (in the case of characters) between x and y. I always thought that was kind of interesting; I have yet to see it implemented in another language, afaik.
Do you know why Microsoft rewrote Minecraft in C++?
Games can certainly be successful in spite of performance problems, but why purposefully introduce them in the first place?
In the case of Notch, he used Java because he was most skilled at Java programming. The garbage collection, procedural generation, and multi-platform availability of the JVM allowed him to successfully release a massive hit as a solo indie developer.
At least that's one way to interpret it. Another way is that, had Notch been as skilled in C++ or (these days) Rust, he could have made just as good a cross-platform game (even more so on mobile) that didn't concern itself with resource management outside of RAII and reference counting, was way more performant (no pauses), and was built in a similar time frame as a skilled Java developer would take.
If you have a great idea, use your favorite language. No doubt about that. You might be successful in spite of its specific deficiencies. However, if you are seeking a new language for a certain problem domain - like video games - choosing one with a feature that actively fights one of your fundamental goals (consistent framerate) is only going to be a decision your users will regret.
It's funny that people like to say "users don't care what language it's written in". The fact that 8th graders know that Minecraft's garbage collector is a problem suggests that using a garbage-collected language for soft-realtime video games results in a seriously leaky abstraction.
Minecraft is a special case: the number of voxels in memory and the constant loading and unloading of blocks of the map every few seconds mean that memory management is incredibly important to the game. That's why the C++ version was more performant, as John Carmack had full control of both the memory layouts and lifetimes, and could fine-tune it all to fit the game.
This is the only reason C++ is still in wide use today, in order for games to limit load times they have to do nasty 'unsafe' tricks (such as loading a block of memory from disk straight into an address space, and just assigning pointers to it), in order to get good performance. But if your game isn't open world (such as minecraft, GTA, Fallout), then using a garbage collected language will be fine (as long it's not stop the world, but runs concurrently).
They offer garbage-collected languages on top of their C++ game engines for scripting game logic. Any significant modifications to the engine will need to be done in C++.
> Unreal's game-level memory management system uses reflection to implement garbage collection.
Yep. Game-level garbage-collected C++. Which is exactly what I said; I just didn't mention GC on C++ to avoid obscuring my point. I promise you the code at the game-engine level is manually memory-managed C++.
BTW: I've written 6 engines for AAA games. 3 of them used reference counting. AAA C++ games use garbage collection all the time. Maybe yours didn't. Mine did. Unreal does. So does Unity.
That's like saying World of Warcraft uses garbage collection because it uses Lua. The higher-level language on top of an engine isn't what was being talked about, since Minecraft was written completely in Java and thus dealt with the garbage collector at every level.
I'm with you on this. I'm an old school gamedev. Shipped games on Atari 800, C64, NES, and most system since then. I hated garbage collection for the longest time.
But, I have to acknowledge plenty of games are shipping with garbage collection. Unreal has garbage collection. Unity games have garbage collection. They seem to all be running just fine and there's plenty of large AAA games or close to AAA games made with both engines.
They do not have garbage collection. You're conflating game development with game engine development. When a big powerful, flexible, C++ engine has been written for you, you can use C# or JavaScript or BluePrints to use the engine to make a game.
Unity has garbage collection at the userland/scripting level. This is markedly different from garbage collection at, say, the level of the objects that marshal and keep tabs on your OpenGL resources. (Not least because--though AFAIK Unity does not do this--you can stop the world for GC only at the scripting layer rather than taking out your rendering, sound output, etc. as a side effect.)
This is the only part of the Jai philosophy I struggle to understand. How is an explicit delete keyword better than deterministic destruction? From my perspective, the former method reintroduces the problem the latter method solves.
Also, polymorphism is bad for games now? I absolutely cringe when I see abstract base classes, virtual functions and classical inheritance, outside of a generic interface in 2016. However, games might be the one case I would strongly consider using those features as part of my design. For example, a Final Fantasy-style job system seems to elegantly lend itself to the "Intro to OOP with C++98" style of runtime polymorphism*
As for exceptions, well I think the reputation of exceptions is one of the great tragedies of C++. Rust, Go, and other new languages seem to have decided that regressing to pervasive error checking boilerplate is more elegant than to have exception handlers.
I think Swift got it right with scoped exceptions and a defer statement - although they selectively use "error handling" as doublespeak to avoid saying the e-word.
*Although, I personally think interface/protocol-based solutions have proven to be the cleanest way to tackle polymorphism -- this is possibly true even in the case of an RPG with class inheritance as a game mechanic.