This suggests HN is functioning as designed. Votes signal agreement while comments surface disagreement.
Negative posts outperform because they create unfinished cognitive work. A clean, agreeable story closes the loop; a contested claim opens one, and engagement follows until it's resolved.
I’ve been vibe-coding replacements for the tools I actually use every day. So far: a Notepad, a REST client, a snipping tool, a local gallery and lightweight notes for iPhone.
The motivation isn't novelty. It's control. I don't need ads, onboarding flows and popups, AI sidebars, bloated menus, unnecessary network calls, etc. A notepad should never touch the network. A REST client shouldn't ship analytics or auto-update itself mid-request.
No plugin system. No extensibility story. Just plain, simple software.
As I build these, I keep realizing how much cognitive overhead we've normalized in exchange for very little utility.
big fan of this and have been doing similar. i just got to a good state with my Linear clone app. im planning to do a REST client soon, how'd that go for you?
Nice, Linear is a perfect target for this mindset.
The REST client went surprisingly smoothly once I committed to keeping it boring.
I'm building Mac apps in Xcode, and I keep multiple small apps in a single Xcode project with a few shared libraries (basic UI components, buttons, layout helpers, etc. to keep all my apps similar).
The REST client is literally just another target + essentially one extra file on top of that. No workspaces, no collections, no sync, no plugins. Just method, URL, headers, body, hit send, show response. Requests are saved and loaded as plain local JSON.
What surprised me is how little code is actually required once you strip away "product features".
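As a sketch of what that plain-JSON persistence could look like (field names are my guess, not the actual format):

```json
{
  "name": "list-todos",
  "method": "GET",
  "url": "https://api.example.com/todos",
  "headers": { "Accept": "application/json" },
  "body": null
}
```

One file per saved request, readable and diffable, no database needed.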
I feel like I'm in a time loop. Every time a big company releases a model, we debate the definition of open source instead of asking what actually matters. Apple clearly wants the upside of academic credibility without giving away commercial optionality, which isn't surprising.
Additionally, we might need better categories. With software, the flow is clear (source, build, binary), but with AI/ML the actual source is an unshippable mix of data, infra, and time, and weights can be both product and artifact.
I'm glad you said it. Incredible tech, and the top comment is debating licensing. The demos I've seen of this are incredible, and it'll be great taking old photos (that weren't shot with a 'spatial' camera) and experiencing them in VR. I think it sums up the Apple approach to this stuff (actually impacting people's lives in a positive way) vs. the typical techie attitude.
AI stays the top story but in a boring way as novelty wears off and models get cheaper and faster (maybe even more embedded). No AGI moment. LLMs start feeling like databases or cloud compute.
No SpaceX or OpenAI IPO moment. Capital markets quietly reward the boring winners instead. The S&P 500 grinds out another double-digit year, mostly because earnings keep up and alternatives still look worse. Tech discourse stays apocalyptic, but balance sheets don't.
If you mute politics and social media noise, 2026 probably looks like one of those years that we later remember as "stable" in retrospect.
> If you mute politics and social media noise, 2026 probably looks like one of those years that we later remember as "stable" in retrospect.
I love this. We focus way too much on the apparent chaos of daily life. Any news item seems like a big wave announcing something bigger, and we spend our time (especially here!) imagining the tsunami to come. Then later, we realize that most events are so unimportant that we forget about them.
To me, this is wishful thinking. The more I see these "our jobs are safe" claims, the more I fear our jobs are not safe, and that people are just trying to convince themselves, which is an indicator of turmoil ahead.
What does "safe" mean? Unemployment in the US right now is under 5% which is historically very good (even though it's been slightly trending upwards over the past few months).
Keep in mind this figure is propped up by the gig economy too. Lots of people are underemployed rather than unemployed, because after a job loss they start driving for Uber, for example, instead of just sitting at home.
Employed. My contention is that AI is getting so good at tech-related things that you'll need far fewer employees. I think Claude Code 4.5 is already there. Honestly, it just needs to permeate the market.
I agree that Claude Code is a lot more effective than I was expecting, but I don't think it can fully replace human software engineers, nor do I think it's on any trajectory to do so. It does make senior engineers a lot more productive so I could see it reducing some demand for new grad software engineers.
The dream never dies, possibly because people remember when class time was supplanted by a movie. Anyone remember "I Am Joe's Heart"? Those movies showed that you could just sit and watch passively like TV, and you'd learn quite a bit, with professional diagrams and animations to help.
Yet your comment is true. Perhaps the difference is that science is inherently interesting because nature is confined to things that are consistent and make sense, while the latest security model for version 3.0 of this-or-that web service protocol, vs. version 2.0, is basically arbitrary and resists effective visual diagramming. Learning software (not computer science) is an exercise in memorizing things that aren't inherently interesting.
Charging by the minute might push people toward shorter, noisier, and more fragmented pipelines. It feels more like a lever to discourage self-hosting over time.
It's not outrageous money today, but it's a clear signal about where they want CI to live.
We software engineers assume value comes from serving more people, faster, with less friction. But many of the things that actually make life feel coherent, such as learning a craft, maintaining friendships, and building tools for one person, only work because they're slow and specific.
Tech doesn't give us the wrong desires but the easier versions of the right ones, and those end up hollow.
I have an Apple TV and I’ve been running iSponsorBlockTV [1] on my Synology box for a while. It auto-skips the sponsored segments, and with YouTube Premium it gives me a clean, ad-free setup.
I can’t stand those in-video intros or sponsored promos, where I’m suddenly pitched a random VPN or productivity app.
This is good. It’s fascinating how it spins up interactive pages instantly. Some of the mini-apps actually feel useful, but others break in ways you wouldn’t expect.
I’m curious to see how it evolves with more complex, multi-step queries.
This vulnerability is basically the worst-case version of what people have been warning about since RSC/server actions were introduced.
The server was deserializing untrusted input from the client directly into module+export name lookups, and then invoking whatever the client asked for (without verifying that metadata.name was an own property).
return moduleExports[metadata.name]
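To make the hazard concrete, here's a minimal sketch (not the actual React code) of why a bare property lookup on a client-supplied name is dangerous:

```javascript
// Hypothetical server-side lookup table of exported functions.
const moduleExports = { greet: () => "hello" };

// Attacker-controlled "export name" arriving from the client.
const fromClient = "constructor";

// A bare property access walks the prototype chain, so this resolves
// to Object's constructor rather than undefined.
const fn = moduleExports[fromClient];
console.log(typeof fn); // "function" — not anything the author exported

// Restricting the lookup to own properties rejects it.
const safe = Object.hasOwn(moduleExports, fromClient)
  ? moduleExports[fromClient]
  : undefined;
console.log(safe); // undefined
```

The own-property check is necessary but not sufficient; you still want the deserializer to only accept names from an explicit allowlist.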
We can patch hasOwnProperty and tighten the deserializer, but there is a deeper issue. React never really acknowledged that it was building an RPC layer. If you look at actual RPC frameworks like gRPC or even old-school SOAP, they all start with schemas, explicit service definitions, and a bunch of tooling to prevent boundary confusion. React went the opposite way: the API surface is whatever your bundler can see, and the endpoint is whatever the client asks for.
My guess is this won't be the last time we see security fallout from that design choice. Not because React is sloppy, but because it’s trying to solve a problem category that traditionally requires explicitness, not magic.
To me it just looks like unacceptable carelessness, not an indictment of the alleged "lack of explicitness" versus something like gRPC. Explicit schemas aren't going to help you if you're so careless that, right at the last moment, you allow untrusted user input to reference anything whatsoever in the server's name space.
But once that particular design decision is made, it is only a matter of time before something like this happens. The one enables the other.
That React embodies an RPC scheme in disguise is quite obvious if you look at the kind of functionality that is implemented; some of it simply cannot be done any other way. But then you should own that decision and add all of the safeguards such a mechanism requires; you can't bolt those on after the fact.
The endpoint is not whatever the client asks for. It's marked specifically as exposed to the user with "use server". Of course the people who designed this recognize that they are designing an RPC system.
A similar bug could be introduced in the implementation of other RPC systems too. It's not entirely specific to this design.
so any package could declare some modules as “use server” and they’d be callable, whether the RSC server owner wanted them to or not? That seems less than ideal.
The vulnerability exists in the transport mechanism in affected versions. Default installs without custom code are also vulnerable even if they do not use any server components / server functions.
Eval has been known to be super dangerous since before the internet grew up and went mainstream. It is so dangerous that deploying stuff containing it should come with a large flashing warning whenever you run it.
Half of web map solutions rely on workers, which can't be easily loaded from 3rd-party origins, so they are loaded as blobs. Loading a worker from a blob is effectively an eval.
No, their whole point is that what they are doing is the literal equivalent of calling eval. Whether it actually uses the word 'eval' or a function called 'eval' is beside the point.
Architecturally, there appears to be a growing attack surface in JavaScript at large, rooted in insecure mandatory dependencies.
If React's foundation and dependencies have vulnerabilities, React will have security issues, directly and indirectly.
This particular issue is a head-scratcher: how could something so basic exist for so long?
Again, it raises questions about React and Next.js given their position of leadership in the JavaScript ecosystem. I don't think this is a standard anyone wants.
Could code reviews be set up for LLMs to search for an issue across codebases once it has been discovered somewhere?
To be fair, the huge JavaScript attack surface has ALWAYS been there. JavaScript runs in a really dynamic environment, and everything from XSS onwards has been fundamentally due to what you can do with that environment.
If you remember “mashups”, these were basically just exploiting the fact that you can load code from any remote server and run it alongside your own code and code from other servers, with credentials shared between all of them. But hey, it is very useful to let Stripe run their stripe.js on your domain. And AdSense. And Mixpanel. And while we're at it, let's npm install 1000 packages for a single-dependency project. It's bad.
This is only really fine as long as you have extremely clear, well-defined actions. You need to verify that the request is sane, well-formed, and makes sense for the current context, at the very least.
As I understand it, RSC is locating the code to run by name, where the name is supplied by the client.
JS/Node can do this via import() or require().
C, C++, Go, etc. can dynamically load plugins, and I would hope that people are careful when doing this with client-supplied data. There is a long history of vulnerabilities where dlopen and friends are used unwisely, and Windows's LoadLibrary has historical design errors that made it almost impossible to use safely.
Java finds code by name when deserializing objects, and Android has been pwned over and over as a result. Apple did the same thing in ObjC with similar results.
The moral is simple: NEVER use a language’s native module loader to load a module or call a function when the module name or function name comes from an untrusted source, regardless of how well you think you’ve sanitized it. ALWAYS use an explicit configuration that maps client inputs to code that it is permissible to load and call. The actual thing that is dynamically loaded should be a string literal or similar.
I have a boring Python server I’ve maintained for years. It routes requests to modules, and the core is an extremely boring map from route name to the module that gets loaded and the function that gets called.
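The same explicit-map pattern translated to JavaScript might look like this (route names and handlers are hypothetical):

```javascript
// An explicit allowlist: client input can only select keys that were
// deliberately registered here. Nothing else is reachable.
const routes = new Map([
  ["users.list", () => ["alice", "bob"]],
  ["health.check", () => "ok"],
]);

function dispatch(routeName) {
  // Map.get never consults a prototype chain, so names like
  // "constructor" or "hasOwnProperty" cannot resolve to anything.
  const handler = routes.get(routeName);
  if (handler === undefined) {
    throw new Error(`unknown route: ${routeName}`);
  }
  return handler();
}

console.log(dispatch("health.check")); // "ok"
// dispatch("constructor") throws instead of reaching dynamic machinery
```

Boring on purpose: the set of callable code is a literal in the source, reviewable at a glance.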
> We can patch hasOwnProperty and tighten the deserializer, but there is a deeper issue. React never really acknowledged that it was building an RPC layer. If you look at actual RPC frameworks like gRPC or even old-school SOAP, they all start with schemas, explicit service definitions, and a bunch of tooling to prevent boundary confusion. React went the opposite way: the API surface is whatever your bundler can see, and the endpoint is whatever the client asks for.
> My guess is this won't be the last time we see security fallout from that design choice. Not because React is sloppy, but because it’s trying to solve a problem category that traditionally requires explicitness, not magic.
Now I'm worried, but I don't use React. So I have to ask: how does SvelteKit fare in this regard?
The vast majority of developers do not update their frameworks to the latest version, so this is something that will linger for years. Particularly if you're on Next something-like-12 and there are breaking changes on the way to 16 plus the patch.
OTOH this is great news for bad actors and pentesters.