Hacker News | munksbeer's comments

I must be doing something very different from the anti-AI people on here. It is ridiculously empowering.

Got an issue in production? Give your agent the knowledge of how to locate the logs, and where the codebase is, and ask it to diagnose, and off it goes. It almost always finds the issue, and while it has been doing that, I've been able to get on with more productive things.

In terms of coding, if you work on it, and give it the correct guidelines, guardrails and ability to check its own work, it produces very high quality results.

The worst part is that in such a short space of time I just don't think I can ever go back to normal coding. I don't mind that, but it sucks when I'm offline.

I honestly don't know what people are doing wrong, or what sort of code they're writing that they can't get an AI to work well for them.


For who?

A lot of institutional investors get caught out all the time when they make mistakes about the fundamentals.

> What is really going to be that difficult with space-based compute?

Stopping some random rogue nation from blowing it up.


Having a space program is extremely difficult, much more difficult than blowing up basically anything on earth.

I agree. And that stuff is soul destroying. I have done it, and right now I work in a place a little smaller, but we get so much done without all the cruft. And we get it done better. I spend much more time writing code now (*) than at the big corps, and we do a much better job because we can iterate.

(*) Well, now Claude spends a lot of time writing code; I spend a lot of time designing and steering it. Claude can write remarkably sophisticated code with the correct steering.


There is almost nothing new in computer programming. 99.999% of any code most of us on this forum write will be repeating patterns that have been written thousands of times before.

Tell a coding agent what your new thing needs to do, give it the absolute constraints (max response times, max failover times, and so on), and tell it which technologies it has access to or could use. Then tell it to spend a lot of time going over and over the design: come up with an initial X number of designs (I use 5), self-criticise each one and weigh them up, and narrow down to three before finally presenting those three options to you.

Now you read the options and understand them, and realise that the AI has either converged on something very sensible or missed something. If it missed something, tell it what and iterate. If it nailed something good, pick the option you prefer and tell it to come up with a more fleshed-out high-level design, describing the flow and behaviour deeply (NO CODE REFERENCES!). Then, once you're happy, tell it to use that to write a comprehensive coding plan. Tell it specifically which coding patterns you prefer (you should have these in your AGENTS.md file already) and which to avoid (single-threaded? multi-threaded? avoid GC? how you typically deal with error conditions, etc.).
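The kind of preferences described here might be captured in an AGENTS.md fragment like this (a hypothetical illustration, not the commenter's actual file):

```markdown
## Coding patterns
- Components are single-threaded event loops; no shared mutable state across threads.
- Hot-path code must be allocation-free (no GC pressure in the critical loop).
- Errors: fail fast on programmer errors; use explicit result types for expected failures.

## Feedback loop
- After every change: build, run the integration tests, start the app, and read its logs.
```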

Then have it start iteratively working on the coding plan, and it *MUST* have a strong feedback loop. If there is no feedback loop initially, I tell it to build one. It must be able to write very fluent integration tests (not just unit tests). It must be able to run the app and read the logs.
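A minimal sketch of the kind of self-checking feedback loop described: a test the agent can run end to end, driving the app's entry point and asserting on observable output. `handle()` here is a hypothetical stand-in for the real application code, not anything from the comment.

```java
// Hypothetical smoke test an agent could run after each change to check its own work.
public class SmokeTest {
    // Stand-in for the real application entry point.
    static String handle(String request) {
        return request.isEmpty() ? "ERROR: empty request" : "OK: " + request.toUpperCase();
    }

    public static void main(String[] args) {
        check("OK: PING", handle("ping"));
        check("ERROR: empty request", handle(""));
        System.out.println("smoke test passed");
    }

    // Fail loudly so the agent gets an unambiguous signal to iterate on.
    static void check(String want, String got) {
        if (!want.equals(got)) throw new AssertionError("want '" + want + "' but got '" + got + "'");
    }
}
```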

Do all this and I bet you get a better result than 80% of developers out there. Coding agents are extremely good when used well.


Disclaimer: I love writing production systems in Java. I was a C++ programmer for 10 years before moving to Java about 15 years ago. Java offers a virtually all-in-one package for writing large systems. You have a single language where you can write code that doesn't need to be the fastest possible, rely on ZGC to do its thing, and it works. Or you can write GC-free code with a mostly quite performant SoA (structure-of-arrays) approach. You can do both in the same codebase, and developers don't need to know different languages to write either style of code. You then have one build system, one deployment system, an incredible set of observability tooling, etc, etc.
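The GC-free SoA style mentioned can be sketched like this (an illustrative toy, with invented names, not code from the comment): parallel primitive arrays instead of one object per row, so the hot loop allocates nothing and leaves nothing new for the GC to trace.

```java
// Sketch of a structure-of-arrays (SoA) layout for a hot path.
public class Orders {
    // Parallel primitive arrays rather than an Order object per entry:
    // no per-element allocation, no object headers, GC has nothing extra to trace.
    private final long[] ids;
    private final double[] prices;
    private final int[] quantities;
    private int count;

    public Orders(int capacity) {
        this.ids = new long[capacity];
        this.prices = new double[capacity];
        this.quantities = new int[capacity];
    }

    public void add(long id, double price, int qty) {
        ids[count] = id;
        prices[count] = price;
        quantities[count] = qty;
        count++;
    }

    // Hot-path loop: reads contiguous primitive arrays, allocates nothing.
    public double notional() {
        double total = 0.0;
        for (int i = 0; i < count; i++) {
            total += prices[i] * quantities[i];
        }
        return total;
    }
}
```

The convenient, GC-reliant style would simply be a `List` of record objects; the point in the comment is that both styles can coexist in one Java codebase.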

So I might be biased, but with the correct curation of AGENTS.md files and skills, we're getting extremely good results using Claude Code writing Java.

Another disclaimer: I haven't tried with another language, but we're happy with the results.


Would be interesting to find out what kind of production systems you write in Java and how you deploy / scale them. What DB backends you use, caching, etc. And whether you're also on Spring.

Always finance, trading systems. In the last 15 years mostly what they call "front office".

At the moment, for the place I work, we deploy mostly on AWS (because that is where our target trading venues often are). DB backends are largely not something we think about too much, because all of that is done out of band, of course, as a final state. Our main persistence is through our "bus" using Aeron, and everything starts and recovers from there. This is not your typical enterprise Java. No Spring.
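The "everything starts and recovers from the bus" pattern is essentially event sourcing; a toy plain-Java sketch of the idea (illustrative only; the real system described uses Aeron as the bus/log):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy event log: state is never loaded from a DB; it is rebuilt by
// replaying the message log from the start.
class EventLog {
    private final List<String> entries = new ArrayList<>();

    void append(String event) {              // in a real system: publish to the bus
        entries.add(event);
    }

    void replay(Consumer<String> handler) {  // recovery: re-feed every event in order
        entries.forEach(handler);
    }
}

class PositionState {
    long position;

    void apply(String event) {               // events like "BUY 10" or "SELL 3"
        String[] parts = event.split(" ");
        long qty = Long.parseLong(parts[1]);
        position += "BUY".equals(parts[0]) ? qty : -qty;
    }
}

class ReplayDemo {
    public static void main(String[] args) {
        EventLog log = new EventLog();
        log.append("BUY 10");
        log.append("SELL 3");
        PositionState state = new PositionState();
        log.replay(state::apply);            // rebuild state purely from the log
        System.out.println("position = " + state.position);
    }
}
```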


Ok that's quite interesting. Am I correct to presume this is crypto trading? I was under the impression most regular HFT is near the exchanges, or physically at the exchange in a DC. Unless it's an AWS Outpost or something.

> My experience is that there's a correlation between powerful type systems and the property that once your program compiles, it's correct. Compiles == correct is rarely true in C or JavaScript. It's often true in Haskell and Rust.

I find this staggeringly hard to believe. Most bugs are logic errors. How does Rust or Haskell prevent these?


Haskell gives you quite a powerful set of tools for constraining and reasoning about your program's behavior. For instance, its ability to define pure functions and control side effects is a very powerful tool for preventing certain classes of bugs. Dereferencing invalid pointer locations and out of bounds array lookups are large classes of bugs in mainstream languages that Haskell basically eliminates entirely. It's not at all the same thing as what you get from the type systems in languages like Java, C++, etc. You really have to try it to appreciate it.

> Most bugs are logic errors.

Are they? IME most bugs are type errors.

Or rather, IME most bugs are logic errors only because I've excluded the possibility of type errors by using a sophisticated type system.


Most of my bugs are logic errors. I write Java. Your comment seems to imply that moving to Rust or Haskell would make a program correct if it compiles.

I don't think porting your program to Haskell would make your program correct.

I think porting your program to Haskell would make all of your bugs logic errors, rather than only most of them.


> If the code were written in Java, I'd have more to read.

That is not really the downside people think it is. Java is a remarkably easy language to read and understand.


> I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards moreso than extrinsic.
>
> I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect. ..
>
> LLMs take all the intrinsic wins and leaves only the extrinsic ones.

I'm not sure I understand this. For me, programming was at first a tool to satisfy my curiosity. When I first started coding I knew nothing about software patterns, how I should be naming my variables, length of functions, DOD vs OOP, functional vs imperative, single responsibility principle, and on and on.

I wrote a mess of a program and got it to do very cool things (for me). I loved it.

Then I got taught more, got my first jobs, learned why programming large systems needs standards, patterns, etc. I became good at that, and have had a long lucrative career out of it.

But I cannot wait for the day when I no longer need to earn money from programming and I can go back to using it just to do "cool shit". At that point, whether I am hacking and slashing myself, or working with an LLM to do something, I don't care. It is the intrinsic goal of solving a puzzle and programming just happens to be the tool I use.

Thinking more deeply about your words, is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing the code out to create something? Would you feel that, if you just tell the LLM what you want to create and it does it, you've lost the enjoyment?


> Thinking more deeply about your words, is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing the code out to create something? Would you feel that, if you just tell the LLM what you want to create and it does it, you've lost the enjoyment?

So there's a lot of nuance to the "is it that you enjoy figuring out the instructions to use to solve a problem".

At the surface, I don't enjoy typing; I don't enjoy fighting syntax checkers, Rust's borrow checker, manual memory management in my personal C projects, typing out the HDL for nand2tetris problems, etc.

However, there have been studies done decades before this LLM boom on the psychological concept called the Generation Effect. While everyone is different and it's not completely black and white, the studies have found that people learn more by actual practice (the act of doing) than by just reading material. That's 100% the case for me.

I can read blogs and resources till the cows come home and I'll have a very surface understanding of a concept. Then I'll go to write the code to implement it, and it rarely works right away because there are demonstrable gaps in my understanding. I'll debug it and iterate on it until it works, and that is what actually solidifies the mental model of what I was trying to learn. Not only can I say for sure that I remember it better, it also seems to form connections in my brain that allow me to apply it in other use cases or spin off fascinating technical tangents.

I not only get my high from that initial "Aha!" moment when I really feel like I understand a concept enough to actually apply it in other scenarios, but I also get my high from tangents that spawn off of that concept.

In many cases, I can draw a direct line from my personal projects back to the root projects that spawned them, because of ideas I came up with while actually implementing them. Because I tried real hard to optimize a C# game engine for an embedded platform, I discovered where the limitations were, and it solidified my knowledge of how old game consoles worked.

This led me down the path of creating a GPU out of an embedded device that I could pair with I/O-constrained embedded devices. This taught me soooo much about the embedded space, and while I heavily improved my C writing abilities, it also made me wish I could write C# on embedded.

Since I had learned C for the embedded project (and I knew MSIL from previous deep dives), I realized I could just translate MSIL into C, which would allow me to run C# anywhere (I got C# working on an SNES, the Linux kernel, and an ESP32-S3).

By implementing that by hand and coming face to face with many small decisions I had to make, I solidified a bunch of concepts in my head around intermediate representations and why they are a massive benefit. Those aha moments (among others) then led me down the path to implementing a just-in-time compilation engine for NES games and the C64 OS in the .NET runtime.

The learnings from that have already spawned some other ideas in my mind, which is why I'm now learning Verilog and FPGA development.

None of these projects solved any useful problem (as in nothing was created that I or anyone else would use). The satisfaction and the high I got from them was having the curiosities of a problem, ideas of a solution, and persevering (partially due to being stubborn) through it and actually accomplishing it. The satisfaction that I actually understand the concepts at a foundational level, which actually ends up breeding excitement for a whole other tangent/problem.

These learnings have indirectly helped me in my day job as well. While I'm not working on anything that sophisticated or cool, all of these actual implementations I've learned have given me direct learnings I have been able to successfully use to create better software in other domains.

So it's not the actual typing I enjoy, but the whole picture of what comes out at the end through that typing. LLMs take most of that away. They let me ideate on a vague solution and then go ahead and implement it for me. Even if I'm specific about the details of the algorithm, they subtly fill in the blanks and the missing pieces that I haven't cemented in my brain yet, making me miss out on the opportunity to do so.

And it steals the accomplishment of the final thing existing. I don't feel a sense of accomplishment by typing "I need a C# to C transpiler" into Google and just downloading one. That's what LLMs feel like, even if I'm trying to steer them at a lower architectural level. I don't have the aha moments, I don't have the learnings, and I'm disconnected from the code.

Thus it feels like it's stealing all the intrinsic rewards from me, only leaving the extrinsic ones. And those are not rewards I am particularly motivated by.

