jpc0's comments

Golang would like a word...

I would like Python a lot better if the tab character had been marked as a syntax error. Would have solved a lot of bullshit.

For a company codebase maybe not, but for solo projects, yeah, it can be. What exact options do you enable in clang-tidy? Have they changed since the last version you used, so you now need to update the config? Do you run cppcheck?

This version of Qt has leaks on exit, so you need to suppress them when running ASan/Valgrind etc...

I agree it's not that hard and should be standard, same with enabling all reasonable warnings and treating warnings as errors.
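As a concrete sketch of what "standard" could look like (the check list below is my own taste, which is exactly the moving target described above): enable -Wall -Wextra -Werror on GCC/Clang, and pin the tidy config in a .clang-tidy file at the repo root so it travels with the project rather than the machine:

    # .clang-tidy (picked up automatically by clang-tidy)
    Checks: 'bugprone-*,performance-*,modernize-*,readability-*'
    WarningsAsErrors: '*'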


I want to reiterate the points others have brought up.

Learn more about whatever domain you are writing software for, then ask what skills you need to improve to solve a problem in that domain.

Also, do not get hung up on specific languages or paradigms; the overarching patterns are universal, and learning them makes things significantly easier to implement.

Finally, the most generic advice: algorithms and data structures. When you start thinking in terms of "this is the data in memory" and "this is the algorithm manipulating the data", you will "level up" quickly. Software isn't cats and dogs and random objects. Thinking in objects can be a decent way to model systems, but when you get into the details you are operating on instructions and data, not on objects.
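A toy sketch of that mindset (names made up for illustration): the state is a couple of flat arrays, and the behaviour is a loop over them; no object hierarchy required.

    #include <cstddef>
    #include <vector>

    // Data: two contiguous arrays. Algorithm: one pass over them.
    void integrate(std::vector<float>& positions,
                   const std::vector<float>& velocities, float dt) {
        for (std::size_t i = 0; i < positions.size(); ++i)
            positions[i] += velocities[i] * dt;
    }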


> A hard lock which requires a reboot or god forbid power cycling is the worst possible outcome

Hilariously, this happens on Windows too.

Actually, everything you said Windows and macOS don't do, they do: if you put a ton of memory pressure on the system, it becomes unresponsive and locks up...


I've OOMed on my Mac several times, and it has never gone completely unresponsive.

You get an OOM dialog with a list of apps that you can have it kill.


I feel I just need to run a slightly-too-large LLM with too much context on an MBP, and it's enough to slow it down irreparably until it suddenly hard-resets. Maybe the memory pressure at which it does that is much higher compared to Linux, though?


I would guess solved, but not easy.

WebRTC makes it possible, but timing is still limited to NTP, whose accuracy is far coarser than one sample; you couldn't possibly get sample-accurate playback, but you could get it to within a ms or so.
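For a sense of scale (assuming 48 kHz audio and typical public-internet NTP accuracy):

    1 sample at 48 kHz = 1/48,000 s ≈ 20.8 µs
    NTP over the public internet ≈ 1 ms of clock error
    1 ms / 20.8 µs ≈ 48 samples of uncertainty

So "within a ms" is roughly the floor that clock sync alone gives you.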

Depending on how far apart your sources are, that might be fine; for instance, with two speakers in two rooms, where you won't get significant phase issues, this is trivial to do (well, "trivial" is an overstatement, but you can do it purely with web technologies).


Rust prevents the footgun, but also prevents shooting in the direction where your foot would be even if it isn't there.

There are absolutely places where that is required and in Rust those situations become voodoo to write.

C++ by default has more complexity, but it has the same complexity regardless of domain.

Rust by default has much less complexity, but in obscure situations off the beaten path the complexity ramps up dramatically, far above C++.

This is not an argument for or against either language; it's a compromise in language design. You can choose to dislike the compromise, but that doesn't mean it was the wrong one; it just means you don't like it.

A simple-sounding but complex example: I want variable X to be loaded into a register in this function and only written to memory at the end of the function.

That is complex in C/C++, but you can look at the generated assembly and attempt to coerce the compiler into it.
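A minimal sketch of the kind of coercion I mean (toy function; nothing is guaranteed, you confirm it worked by reading the assembly, e.g. on godbolt):

    #include <cstddef>

    // If we accumulated directly through `out`, the compiler would have
    // to assume `out` and `data` may alias and could emit a store every
    // iteration. Accumulating into a local lets it keep `acc` in a
    // register and emit a single store at the end.
    void sum(const int* data, std::size_t n, int* out) {
        int acc = 0;
        for (std::size_t i = 0; i < n; ++i)
            acc += data[i];
        *out = acc;   // the only write to memory
    }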

In Rust, everything is so abstracted I wouldn't know where to begin looking to coerce the compiler into generating that machine code, and I might just decide to implement it in asm, which defeats the point of using a high-level language.

Granted, you might go the FFmpeg route and just choose to do that regardless, but Rust makes it much harder.

You don't always need that level of control, but when you do, it seems absurdly complex.


> I want variable X to be loaded into a register in this function and only written to memory at the end of the function.

> That is complex in C/C++, but you can look at the generated assembly and attempt to coerce the compiler into it.

> In Rust, everything is so abstracted I wouldn't know where to begin looking

I don't know if I fully understand what you want to do, but (1) controlling register allocation is the realm of inline asm, be it in C, C++, or Rust. And (2) if "nudging" the compiler is what you want, then it's literally the same thing in Rust as in C++: it's a matter of inspecting the asm yourself or plonking your function onto godbolt.
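For what it's worth, one nudge that exists in both languages is an empty asm statement used as an optimizer barrier. A sketch using GCC/Clang extended asm (the function name is made up; Rust's core::arch::asm! can express the same constraint):

    // "+r" tells the compiler the value lives in a register here and may
    // be modified, so it can't cache assumptions about it across this
    // point. The asm body itself is empty.
    inline void keep_in_register(long& x) {
        asm volatile("" : "+r"(x));
    }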


I think the issue is that naive translation of C into ASM, which is somewhat simulated by -O0, is usable in C, while it isn't in Rust.


This.

I agree that you will probably just end up writing asm, but that was a trivial example; there are non-trivial examples involving jump tables, unrolled loops, etc.

Effectively, weird optimisations that rely on the gap between the abstract machine the compiler targets and the real hardware. There are just more abstractions in Rust than in C++ by virtue of the safety mechanisms; it's plain not possible to have the one without the other.

The hardware can do legal things that Rust cannot allow, or can allow only if you write extremely convoluted code; C/C++ is closer to the metal in that regard.

Don't get me wrong, I am all for the right abstractions; they allow insane optimisations that humans couldn't dream of, but there is a flip side.


My high-level understanding of the UB concept is that it means false positives to the question "Is this a valid program?". Given that the philosophy of C is mostly "do what the programmer wrote, no questions asked", the language is designed so that the probability of false negatives goes to zero. This obviously means that the number of false positives goes up.
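A classic C/C++ instance of such a false positive: this compiles cleanly, yet for x == INT_MAX it is not a valid program (signed overflow is UB), and optimizers routinely exploit that:

    // Accepted by the compiler, but invalid for x == INT_MAX. Because
    // signed overflow "cannot happen", -O2 typically folds the whole
    // function down to `return false;`.
    bool will_overflow(int x) {
        return x + 1 < x;
    }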

Rust basically takes the opposite approach of making false positives go to zero, which makes the false negatives go up, which you need to work around with unsafe or type gymnastics.

The third approach is to make both false positives and false negatives zero by restricting the set of expressible programs, which is what non-systems languages do.


Are you sure Claude didn't do exactly the same thing, and the harness, Claude Code, just hid it from you?

I have seen AI agents fall into the exact loop that GP discussed and need manual intervention to get out of it.

Also, blindly having the AI migrate code from "spaghetti C" to "structured C++" sounds more like a recipe for turning "spaghetti C" into "fettuccine C++".

Sometimes it's hidden data structures and algorithms you want to formalise when doing a large-scale refactor. I have found that AIs are definitely able to identify those, but it's not their default behaviour, and they fall out of it pretty quickly if not constantly reminded.


> Are you sure Claude didn't do exactly the same thing, and the harness, Claude Code, just hid it from you?

What do you mean? Are you under the impression I'm not even reading the code? The code is actually the most important part: I already have working software, but what I want is working software that I can understand and work with better (and so far, the results have been good).


Reading the code and actually understanding the code are not the same thing.

"This looks good", vs "Oh that is what this complex algorithm was" is a big difference.

Effectively, to verify that the code is not just being rewritten into the same code with C++ syntax and conventions, you need to understand the original C code. That means the hard part was not the code generation (via LLM or fingers) but the understanding, and I'm unsure the AI can do the high-level understanding, since I have never gotten it to produce that understanding without explicitly telling it.

Effectively, "x.c, y.c, z.c implements a DSL but is convoluted and not well structured, generate the same DSL in C++" works great. "Rewrite x.c, y.c, z.c into C++ buildings abstractions to make it more ergonomic" generally won't recognise the DSL and formalise it in a way that is very easy to do in C++, it will just make it "C++" but the same convoluted structure exists.


> Reading the code and actually understanding the code are not the same thing.

Ok. Let me be more specific then. I'm "understanding" the code since that's the point.

> I'm unsure the AI can do the high-level understanding, since I have never gotten it to produce that understanding without explicitly telling it.

My experience has been the opposite: it often starts by producing a usable high-level description of what the code is doing (sometimes imperfectly) and then proposes refactors that match common patterns -- especially if you give it enough context and let it iterate.

> "Rewrite x.c, y.c, z.c into C++ buildings abstractions to make it more ergonomic" generally won't recognise the DSL and formalise it in a way that is very easy to do in C++, it will just make it "C++" but the same convoluted structure exists.

That can happen if you ask for a mechanical translation or if the prompt doesn't encourage redesign. My prompt was literally to make it well-designed, idiomatic C++, and it did that. Inside the LLM's training data is a whole bunch of C++ code, and it seems to be leaning on that.

I did direct some goals (e.g., separating device-specific code and configuration into separate classes so adding a device means adding a class instead of sprinkling if statements everywhere). But it also made independent structural improvements: it split out data generation vs file generation into pipeline/stream-like components and did strict separation of dependencies. It's actually well designed for unit testing and mocking even though I didn't tell it I wanted that.

I'm not claiming it has human-level understanding or that it never makes mistakes -- but "it can't do high-level understanding" doesn't match what I'm seeing in practice. At minimum, it can infer the shape of the application well enough to propose and implement a much more ergonomic architecture, especially with iterative guidance.

I had to have it introduce some "bugs" for byte-for-byte matching because it had generalized some of the file generation and the original C code generated slightly different file structures for different devices. There's no reason for this difference; it's just different code trying to do the same thing. I'll probably remove these differences when the whole thing is done.


That clarifies a lot.

So effectively it was at least partly guided refactoring. Not blind vibe coding.


> Which is a huge risk factor for Rust, especially in today's context of the Linux kernel. If I have an object created/handled by external native code, how do I make sure that it respects Rust's lifetime/aliasing rules?

Can you expand on this point? Like, are you worried about whether the external code is going to free the memory out from under you? That's the part no guarantee can cover: the compiler cannot guarantee what happens at runtime no matter what the author of a language wants. The CPU will do what it's told; it couldn't care about Rust's guarantees even if you built your code entirely in Rust.

When you are interacting with the real world and real things, you need to work with different assumptions: if you don't trust that the data will remain unmodified, then copy it.
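A minimal sketch of that defensive copy at an FFI boundary (hypothetical function, shown in C++; the same move applies on the Rust side by copying into an owned type):

    #include <cstddef>
    #include <vector>

    // External code hands us a pointer it may later free or mutate.
    // Copying into owned storage is the only assumption-free option.
    std::vector<unsigned char> take_copy(const unsigned char* data,
                                         std::size_t len) {
        return std::vector<unsigned char>(data, data + len);
    }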

No matter how many abstractions you put on top of it, there is still lightning in a rock messing with 1s and 0s.


This is more akin to selling a car to an adult who cannot drive, who then proceeds to ram it through their garage door.

It's perfectly within the capabilities of the car to do so.

The burden of proof is much lower, though, since the worst that can happen is that you lose some money or, in this case, hard drive contents.

For the car, the seller would be investigated because there was a possible threat to life; for an AI, it's buyer beware.


I think the general public has a MUCH better grasp on the potential consequences of crashing a car into a garage than of some sort of auto-run terminal-command mode in an AI agent.

These are being sold as a way for non-developers to create software, I don't think it's reasonable to expect that kind of user to have the same understanding as an actual developer.

I think a lot of these products avoid making that clear because the products suddenly become a lot less attractive if there are warnings like "we might accidentally delete your whole hard drive or destroy a production database."


I don't think that's entirely true. Seeking mastery does not imply being a master.

If you have only ever seen one pattern to solve a problem (a trivial example: inheritance) and you apply it to the best of your ability, then you have achieved mastery within your ability. Once you see a different pattern, composition, you can then master that, and master identifying when each is suitable.

Lack of mastery is just using inheritance despite seeing alternative patterns.

Naturally, mastery also includes seeking alternative solutions, but just because a codebase uses inferior patterns does not mean those who came before did not strive towards mastery; it's possible they didn't know better at the time and now can't get the time to revise the work.

There's always a juggling act in the real world.

Assume incompetence and not malice, and remember that incompetence is not a permanent state. A person without experience can be seen as incompetent but quickly become competent with training or experience; the code they wrote back then still stems from incompetence.

Strive to see your previous self as incompetent (learn something new every day)

