
The section "The Problem of the Individual-Element Mindset" bugs me quite a bit, the core of it being:

> This architectural mindset does lead to loads of problems as a project scales. Unfortunately, a lot of people never move past this point on their journey as a programmer. Sometimes they do not move past this point as they only program in a language with automatic memory management (e.g. garbage collection or automatic reference counting), and when you are in such a language, you pretty much never think about these aspects as much.

Billions of dollars worth of useful software has been shipped in languages with garbage collection or ARC: roughly the entire Android (JVM) and iOS (ARC) application ecosystems, massively successful websites built on top of JVM languages, Python (Instagram etc.), PHP (Wikipedia, Facebook, ...).

In game development specifically, since there's a Casey Muratori video linked here, we have the entire Unity engine set of games written in garbage-collected C#, including a freaking BAFTA winner in Outer Wilds. Casey, meanwhile, has worked on a low-level game development video series for a decade and... never actually shipped a game?


> Casey, meanwhile, has worked on a low-level game development video series for a decade and... never actually shipped a game?

He worked with Jonathan Blow on "The Witness"[0].

As the developer of the "Bink 2" video codec[1] and the animation tool "Granny 3d"[2], his code powers thousands of games.

[0] https://store.steampowered.com/app/210970/The_Witness/, [1] https://www.radgametools.com/bnkmain.htm, [2] https://www.radgametools.com/granny.html


I don't think Casey has ever claimed to be the developer of Bink 2. He usually brings it up when explaining the kind of work that was performed at Rad Game Tools, then explicitly states that his work was largely on Granny 3d in particular.

"Bink [...] was written and researched by [...] mostly Fabian Giesen, Casey Muratori and Jeff Roberts.", Bink Video Credits

https://www.radgametools.com/binkhcrd.htm


> Muratori ... never actually shipped a game?

Ahh, I was just thinking about this earlier this morning.

Remember the Muratori v Uncle Bob debate? Back then the ad-hominems were flying left and right, with Muratori being the crowd favourite (a real programmer) compared to Uncle Bob (who allegedly didn't write software).

Then a few months ago Muratori gave a really interesting multi-hour talk on the history of OOP (including plenty of well-thought-out criticism). I liked the talk, so I fully expected a bunch of "real programmers" to shoot that talk down as academic nonsense.

Anyway, looks like Muratori is right on schedule to graduate from programmer to non-programmer.


> the entire Unity engine set of games written in garbage-collected C#, including a freaking BAFTA winner in Outer Wilds.

Some of those games (though not all of them, unfortunately) try to work around C#'s garbage collector for performance reasons, using essentially ad hoc memory allocators via object pools and similar approaches. This is probably what this part...

--- And if you ever do think about these, it’s usually because you are trying to do something performance-oriented and you have to pretend you are managing your own memory. It is common for many games that have been written with garbage collected languages to try to get around the inadequacies of not being able to manage your own memory ---

...is referring to.
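The object-pool idea is simple enough to sketch in a few lines (shown here in Rust rather than C#, purely for illustration; the `Pool` type and its names are made up):

```rust
// Minimal object-pool sketch: reuse allocations instead of letting a
// collector (or allocator) churn through fresh ones every frame.
struct Pool<T> {
    free: Vec<T>,
}

impl<T: Default> Pool<T> {
    // Pre-build n reusable objects up front.
    fn with_capacity(n: usize) -> Self {
        Pool { free: (0..n).map(|_| T::default()).collect() }
    }

    // Take an object out of the pool; make a new one only if empty.
    fn acquire(&mut self) -> T {
        self.free.pop().unwrap_or_default()
    }

    // Return the object so the next acquire reuses it.
    fn release(&mut self, obj: T) {
        self.free.push(obj);
    }
}

fn main() {
    let mut pool: Pool<Vec<u8>> = Pool::with_capacity(4);
    let mut buf = pool.acquire();
    buf.extend_from_slice(b"frame data");
    buf.clear(); // reset before returning, so reuse is clean
    pool.release(buf);
    assert_eq!(pool.free.len(), 4);
}
```

In a GC'd language the same pattern keeps objects alive in the pool so the collector never sees them as garbage, which is the whole point.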

> Casey, meanwhile, has worked on a low-level game development video series for a decade and... never actually shipped a game?

These videos are all around 90-120 minutes long, posted with gaps between them (my guess is whenever Casey had time), and their purpose and content are pedagogical, so he spends time explaining what he does; they aren't just screencasts of someone writing code.

If you add up the videos' runtime and assume someone works on it 6h/day on weekdays alone, it'd take around 8-9 months to write whatever is written there, and that still ignores the amount of time spent on explanations (which are the main focus of the videos).

So it is very misleading to use the series as some sort of measure for what it'd take Casey (or anyone else, really) to make a game using "low level" development.


Unity isn't written in C#, it's C++. C# is used as the scripting engine.

Yes, of course, but the actual games are written (primarily! sometimes they will optimize something when actually necessary with a lower-level language! that's great!) in the scripting language.

The OP implies heavily that writing a program in a language with anything but pure manual memory management makes you lesser as a programmer than him: "Unfortunately, a lot of people never move past this point on their journey as a programmer" implies he has moved further on in his "journey" than those that dare to use a language with GC.

(and with respect to C++ note that OP considers RAII to be deficient in the same way as GC and ARC)


Indeed, to the extent that Casey has a point here (which, sure, I think was fine in its original context; it's just unfortunate if you mistook a clever quip at a party for a life philosophy), C++ is riddled with types explicitly making this, uh, choice.

It's not near the top of the list of reasons std::unordered_map is a crap type, but it's certainly on there. If we choose the capacity explicitly, knowing we'll put no more than 8340 (key, value) pairs into Rust's HashMap, we allocate only once, making enough space for all 8340 pairs, because duh, that's what capacity means. But std::unordered_map doesn't take the hint: it merely makes its internal hash table big enough, and each of the 8340 pairs is allocated separately.
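The Rust side of that claim is easy to check as a runnable sketch: `with_capacity` reserves room up front, and filling the map to that capacity doesn't trigger a resize:

```rust
use std::collections::HashMap;

fn main() {
    // Reserve room for all 8340 pairs up front: one allocation.
    let mut map: HashMap<u32, u32> = HashMap::with_capacity(8340);
    let initial_capacity = map.capacity();
    assert!(initial_capacity >= 8340);

    for i in 0..8340 {
        map.insert(i, i * 2);
    }

    // No rehash or reallocation was needed while filling it.
    assert_eq!(map.capacity(), initial_capacity);
}
```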


Right, he says:

> If you want to make pointers not have a nil state by default, this requires one of two possibilities: requiring the programmer to test every pointer on use, or assume pointers cannot be nil. The former is really annoying, and the latter requires something which I did not want to do (which you will most likely not agree with just because it doesn’t seem like a bad thing from the start): explicit initialization of every value everywhere.

In Kotlin (and Rust, Swift, ...) these are not the only options. You can check a pointer/reference once, and then use it as a non-nullable type afterwards. And if you don't want to do that, you can just add !!/!/unwrap: you are just forced to explicitly acknowledge that you might blow up the entire app.
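The same pattern in Rust terms, as a minimal sketch:

```rust
fn main() {
    let name: Option<String> = Some("Ada".to_string());

    // Check once; inside the branch, `n` is a plain non-optional value.
    if let Some(n) = &name {
        assert_eq!(n.len(), 3);
    }

    // Or explicitly acknowledge that this can blow up the whole app:
    let n = name.unwrap();
    assert_eq!(n, "Ada");
}
```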


Stacked diffs are the best approach. Working at a company that uses them and reading about the "pull request" workflow that everyone else subjects themselves to makes me wonder why everyone isn't using stacked diffs instead of eternally repeating this "squash vs. not squash" debate.

every commit is reviewed individually. every commit must have a meaningful message, no "wip fix whatever" nonsense. every commit must pass CI. every commit is pushed to master in order.


This just skips the:

> First, if you aren't writing device drivers/kernels or something very low level there is a high probability your program will have zero unsafe usages in it.

from the original comment. Meanwhile all C code is implicitly “unsafe”. Rust at least makes it explicit!

But even if you ignore memory safety issues bypassed by unsafe, Rust forces you to handle errors, it doesn’t let you blow up on null pointers with no compiler protection, it allows you to represent your data exhaustively with sum types, etc etc etc
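A minimal Rust sketch of those last two points (the `Shape` type here is made up for illustration):

```rust
// A sum type: the compiler knows these are the only variants.
enum Shape {
    Circle { radius: f64 },
    Square { side: f64 },
}

fn area(s: &Shape) -> f64 {
    // `match` must be exhaustive; adding a variant later is a
    // compile error everywhere a case is missing.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Square { side } => side * side,
    }
}

fn main() {
    // Errors are values: you can't touch the i32 inside this Result
    // without spelling out what happens in the Err case.
    let parsed: Result<i32, _> = "42".parse::<i32>();
    match parsed {
        Ok(n) => assert_eq!(n, 42),
        Err(e) => panic!("unexpected: {e}"),
    }

    assert!((area(&Shape::Square { side: 3.0 }) - 9.0).abs() < 1e-9);
}
```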


Isn’t Rust proffered as a systems language? One that begged to be accepted into the Linux kernel?

Don’t device drivers live in the Linux kernel tree?

So, unsafe code is generally approved in device driver code?

Why not just use C at that point?


Because writing proper kernel C code requires decades of experience to navigate the implicit conventions and pitfalls of the existing codebase. The human pipeline producing these engineers is drying up because nobody's interested in learning that stuff by going through years of patch rejection from maintainers that have been at it since the beginning.

Rust's rigid type system, compiler checks and insistence on explicitness force a _culture change_ in the organization. In time, this means that normal developers will regain a chance to contribute to the kernel with much less chance of breaking stuff. Rust not only makes the compiled binary more robust but also makes the codebase more accessible.


I am quite certain that someone who has been on HN as long as you have is capable of understanding the difference between 0% compiler-enforced memory safety in a language with very weak type safety guarantees, and 95%+ of code regions with strong type safety guarantees, even in the worst case of low-level driver code that performs DMA.


Please explain the differences in typical aliasing rules between C and Rust. And please explain posts like

https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/

https://news.ycombinator.com/item?id=41947921

https://lucumr.pocoo.org/2022/1/30/unsafe-rust/


The first two links are the same article, and they point out that certain structures can be very hard to write in Rust, with linked lists being a famous example. The point stands, but I would say the tradeoff is worth it (the author also mentions at the end that they still think Rust is great).

The third link is absolutely nuts. Why would you want to initialize a struct like that in Rust? It's like saying a functional programming language is hard because you can't do goto. The author sets themselves a challenge to do something that absolutely goes against how Rust works, and then complains about how hard it is.

If you want to do it to interface with non-rust code, writing a C-style string to some memory is easier.


You phrase that as if 0-5% of a program being harder to write disqualifies all the benefits of isolating memory safety bugs to that 0-5%. It doesn't.


And it can easily be more than 5%, since some projects both have lots of large unsafe blocks, and the presence of an unsafe block can require validation of much more than the block itself. It is terrible of you to argue this way, overall, if my understanding is far better than yours.

And even your argument taken at face value is poor: if unsafe is much harder, and it covers some of the most critical and already-hard code, like some complex algorithm, it could by itself be worse overall. And Rust specifically has developers use unsafe for some algorithm implementations, for flexibility and performance.


> since if it is much harder, and it is some of the most critical code and already-hard code, like some complex algorithm, it could by itself be worse overall.

(Emphasis added)

But is it worse overall?

It's easy to speculate that some hypothetical scenario could be true. Of course, such speculation on its own provides no reason for anyone to believe it is true. Are you able to provide evidence to back up your speculation?


Even embedded kernels can and regularly do have < 5% unsafe code.


Is three random people saying unsafe Rust is hard supposed to make us forget about C’s legendary problems with UB, nil pointers, memory management bugs, and staggering number of CVEs?

You have zero sense of perspective. Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it) we’re talking about a tiny fraction of the overall code of Rust programs in the wild. You have to pay careful attention to C’s issues virtually every single line of code.

With all due respect this may be the singular dumbest argument I’ve ever had the displeasure of participating in on Hacker News.


> Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it)

I think there's a very strong dependence on exactly what kind of unsafe code you're dealing with. On one hand, you can have relatively straightforward stuff like get_unchecked or calling into simpler FFI functions. On the other hand, you have stuff like exposing a safe, ergonomic, and sound API for self-referential structures, which is definitely an area of active experimentation.
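The straightforward end of that spectrum is tiny; in this trivial sketch, the entire proof obligation of the unsafe block is that the index is in bounds:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Caller's obligation: the index must be in bounds. That's the
    // whole proof burden for this particular unsafe block, unlike C,
    // where the same discipline applies to every access everywhere.
    let x = unsafe { *v.get_unchecked(1) };
    assert_eq!(x, 20);
}
```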

Of course, in this context all that is basically a nitpick; nothing about your comment hinges on the parenthetical.


[flagged]


> Should one compare Rust with C or Rust with C++?

Well, you're the one asking for a comparison with C, and this subthread is generally comparing against C, so you tell us.

> Modern C++ provides a lot of features that makes this topic easier, also when programs scale up in size, similar to Rust. Yet without requirements like no universal aliasing. And that despite all the issues of C++.

Well yes, the latter is the tradeoff for the former. Nothing surprising there.

Unfortunately even modern C++ doesn't have good solutions for the hardest problems Rust tackles (yet?), but some improvement is certainly more welcome than no improvement.

> Which is wrong

Is it? Would you be able to show evidence to prove such a claim?


The only thing I really found weird syntactically when learning it was the single quote for lifetimes because it looks like it’s an unmatched character literal. Other than that it’s a pretty normal curly-braces language, & comes from C++, generic constraints look like plenty of other languages.

Of course the borrow checker and when you use lifetimes can be complex to learn, especially if you’re coming from GC-land, just the language syntax isn’t really that weird.


Agreed. In practice Rust feels very much like a rationalized C++ in which 30 years of cruft have been shrugged off. The core concepts have been reduced to a minimum and reinforced. The compiler error messages are wildly better. And the tooling is helpful and starts with opinionated defaults. Which all leads to the knock-on effect of the library ecosystem feeling much more modular, interoperable, and useful.


It's really an ML with type classes and a better syntax (and a non-stupid module sublanguage) that also just happens to be more C-like.


I feel like it is the opposite: Go gives you a ton of rope to hang yourself with, and hopefully you will notice that you did. Error handling is essentially optional, there are no sum types and no exhaustiveness checks, the stdlib does things like assume filepaths are valid strings, if you forget to assign something it just becomes zero regardless of whether that's semantically reasonable for your program, there's no nullability checking enforcement for pointers, etc.

Rust OTOH is obsessively precise about enforcing these sort of things.

Of course Rust has a lot of features and compiles slower.


> error handling is essentially optional

Theoretically optional, maybe.

> the stdlib does things like assume filepaths are valid strings

A Go string is just an array of bytes.

The rest is true enough, but Rust doesn't offer just the bare minimum features to cover those weaknesses, it offers 10x the complexity. Is that worth it?


I think the standard convention if you just want a stringly-typed error like Go is anyhow?

And maybe not quite as standard, but thiserror if you don’t want a stringly-typed error?
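For anyone unfamiliar with what thiserror saves you from writing, this is roughly the hand-written, std-only equivalent (the `AppError` type and its variants are made up for illustration; thiserror's derive generates the Display and Error impls from attributes):

```rust
use std::fmt;

// Roughly what you'd write by hand; thiserror derives this for you.
#[derive(Debug)]
enum AppError {
    NotFound(String),
    Io(std::io::Error),
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::NotFound(name) => write!(f, "not found: {name}"),
            AppError::Io(e) => write!(f, "io error: {e}"),
        }
    }
}

impl std::error::Error for AppError {}

fn main() {
    let e = AppError::NotFound("config.toml".into());
    assert_eq!(e.to_string(), "not found: config.toml");
}
```

anyhow sits on the other side: it erases the concrete type entirely, which is what makes it feel like Go's stringly-typed errors.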


Google’s terminal level is one past new grad and it has a full parallel non-management IC track, I don’t think that they’re pushing people that hard into leadership roles.


Google lets people stay at L4 forever and Meta does at L5 with no expectation of further growth.

Yes the expectations are probably still higher, but these companies don’t expect everyone to grow past “mostly self-sufficient engineer” as the parent comment suggests, and for people that do want to do that there’s a full non-management path to director-equivalent IC levels. My impression is that small companies are more likely to treat management as a promotion rather than as a lateral move to a different track (whenever I hear “promoted to manager” I kinda shudder)


Depends on the team — managing can be quite a bit more scope than being a senior IC, depending on expectations for that role. You have broader ownership of technical outcomes over time, even aside from the extra responsibility for growing a team. Managers have all the responsibility of a senior engineer plus more. In that way, manager feels like a clear promotion to me. Manager vs staff eng, maybe not though.


Management not being a promotion doesn’t mean that managers aren’t (usually — I’ve at times been at both equal and higher levels than my managers) higher levels than their reports. It means that switching to a management role from IC is never a promotion itself (i.e. always L6 -> M1 in Google/Meta levels) and it never comes with any difference in compensation.


I haven't been a manager, but my understanding is that the higher IC roles assume you're competent enough to do some management-like things if needed ("responsibility without control"), and I also assume that being a manager helps with compensation because they actually teach you how the review process works and let you into the calibration meetings.


I don’t think being popular with the players is entirely irrelevant for players in team sports. Locker room cohesion matters.

