Don’t be discouraged. The work that you enjoy doing is still here, and will still be here after you’ve graduated.
My best advice to you would be to learn CS the hard way (without AI).
Ignore the “AI learning tools” see on HN or mentioned by peers. Learning should be challenging so if it feels like a shortcut, it probably is. Don’t fall into that trap and you’ll be a more competent developer as a result, both with and without AI
> The Supreme Court previously rejected Thaler's request to hear his argument in a separate case involving prototypes for a beverage holder and a light beacon concerning whether AI-generated inventions should be eligible for U.S. patent protection. His patent applications were rejected by the U.S. Patent and Trademark Office on similar grounds.
Microsoft isn’t going to declare death of the PC and pivot to “cloud computers”/virtual desktops (again) just because of temporary RAM/SSD supply shortages lol
> And Amazon CEO just said it out loud about cloud computers.
And Google said Stadia would have “negative latency”
Perhaps I worded that poorly. I agree that technically this is an injection. What I don't think is accurate is to then compare it to sql injection and how we fixed that. Because in SQL world we had ways to separate control channels from data channels. In LLMs we don't. Until we do, I think it's better to think of the aftermath as phishing, and communicate that as the threat model. I guess what I'm saying is "we can't use the sql analogy until there's a architectural change in how LLMs work".
With LLMs, as soon as "external" data hits your context window, all bets are off. There are people in this thread adamant that "we have the tools to fix this". I don't think that we do, while keeping them useful (i.e. dynamically processing external data).
Reading works when you generate 50 lines a day. When AI generates 5,000 lines of refactoring in 30 seconds, linear reading becomes a bottleneck. Human attention doesn't scale like GPUs. Trying to "just read" machine-generated code is a sure path to burnout and missed vulnerabilities. We need change summarization tools, not just syntax highlighting
Whether you or someone/something else wrote it is irrelevant
You’re expected to have self-reviewed and understand the changes made before requesting review. You must to be able to answer questions reviewers have about it. Someone must read the code. If not, why require a human review at all?
Not meeting this expectation = user ban in both kernel and chromium
This is exactly the gap I'm worried about. human review still matters, but linear reading breaks down once the diff is mostly machine-generated noise.
Summarizing what actually changed before reading feels like the only way to keep reviews sustainable.
My best advice to you would be to learn CS the hard way (without AI).
Ignore the “AI learning tools” see on HN or mentioned by peers. Learning should be challenging so if it feels like a shortcut, it probably is. Don’t fall into that trap and you’ll be a more competent developer as a result, both with and without AI
reply