This suggests HN is functioning as designed. Votes signal agreement while comments surface disagreement.

Negative posts outperform because they create unfinished cognitive work. A clean, agreeable story closes the loop; a contested claim leaves it open, and engagement follows.


I’ve been vibe-coding replacements for the tools I actually use every day. So far: a Notepad, a REST client, a snipping tool, a local gallery and lightweight notes for iPhone.

The motivation isn't novelty. It's control. I don't need ads, onboarding flows and popups, AI sidebars, bloated menus, unnecessary network calls, etc. A notepad should never touch the network. A REST client shouldn’t ship analytics or auto-update itself mid-request.

No plugin system. No extensibility story. Just plain/simple software.

As I build these, I have been realizing how much cognitive overhead we’ve normalized in exchange for very little utility.


Starting from scratch or did you try to start from some open-source tools? I'm sure there are dozens of note-taking apps on f-droid for instance.

Bespoke apps that would normally be too time‑consuming to justify building are a perfect use case for LLMs. A few I've built already:

- Pre-AI Image Search – find images on the web with upload dates prior to 2022

- HN Notifier – a Mac menu bar app that shows toast notifications for Hacker News submissions containing topics of interest to me

- Comic Display – serves CBR/CBZ archives or image folders over LAN so they can be read on mobile devices


I think this is a huge use case for vibe coding, and something I’m excited to do more in 2026.

There are so many apps I want that companies are not incentivised to build for me.


Big fan of this and have been doing similar. I just got to a good state with my Linear clone app. I'm planning to do a REST client soon, how'd that go for you?

Nice, Linear is a perfect target for this mindset.

The REST client went surprisingly smoothly once I committed to keeping it boring.

I'm building Mac apps in Xcode, and I keep multiple small apps in a single Xcode project with a few shared libraries (basic UI components, buttons, layout helpers, etc. to keep all my apps similar).

The REST client is literally just another target + essentially one extra file on top of that. No workspaces, no collections, no sync, no plugins. Just method, URL, headers, body, hit send, show response. Requests are saved and loaded as plain local JSON.
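Roughly, a saved request is nothing more than a small JSON blob along these lines (the field names here are illustrative, not the app's exact format):

    {
      "method": "GET",
      "url": "https://api.example.com/users/42",
      "headers": { "Accept": "application/json" },
      "body": null
    }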

What surprised me is how little code is actually required once you strip away "product features".


I feel like I'm in a time loop. Every time a big company releases a model, we debate the definition of open source instead of asking what actually matters. Apple clearly wants the upside of academic credibility without giving away commercial optionality, which isn't unsurprising.

Additionally, we might need better categories. With software, the flow is clear (source, build, binary), but with AI/ML the actual source is an unshippable mix of data, infra and time, and the weights are both product and artifact.


I'm glad you said it. Incredible tech, and the top comment is debating licensing. The demos I've seen of this are incredible, and it'll be great taking old photos (that weren't shot with a 'spatial' camera) and experiencing them in VR. I think it sums up the Apple approach to this stuff (actually impacting people's lives in a positive way) vs the typical techie attitude.

> which isn't unsurprising

There has to be an easier combination of words for conveying the same thing.


I don't think it isn't unsurprising :)

Wait so you are surprised?

My prediction: 2026 looks normal.

AI stays the top story but in a boring way as novelty wears off and models get cheaper and faster (maybe even more embedded). No AGI moment. LLMs start feeling like databases or cloud compute.

No SpaceX or OpenAI IPO moment. Capital markets quietly reward the boring winners instead. S&P 500 grinds out another double digit year, mostly because earnings keep up and alternatives still look worse. Tech discourse stays apocalyptic, but balance sheets don't.

If you mute politics and social media noise, 2026 probably looks like one of those years that we later remember as "stable" in retrospect.

Bonus: Bitcoin sees both 50k and 150k.


> If you mute politics and social media noise, 2026 probably looks like one of those years that we later remember as "stable" in retrospect.

I love this. We focus way too much on the apparent chaos of daily life. Any news seems like a big wave that announces something bigger, and we spend our time (especially here!) imagining the tsunami to come. Then later, we realize that most events are so unimportant that we forget about them.


I'm not sure OpenAI can realistically afford not to IPO given its spending commitments.


To me, this is wishful thinking. The more I see these "our jobs are safe" claims, the more I fear our jobs are not safe, and that people are just trying to convince themselves, which is an indicator of turmoil ahead.


Who is “our”?


Tech folk. Anyone really.


What does "safe" mean? Unemployment in the US right now is under 5% which is historically very good (even though it's been slightly trending upwards over the past few months).


Keep in mind this number is propped up by the gig economy too. Lots of people are underemployed rather than unemployed, because after a job loss they start driving Uber, for example, instead of just sitting at home.


Employed. My contention is that AI is getting so good at tech-related things that you'll need far fewer employees. I think Claude Code 4.5 is already there. Honestly, it just needs to permeate the market.


I agree that Claude Code is a lot more effective than I was expecting, but I don't think it can fully replace human software engineers, nor do I think it's on any trajectory to do so. It does make senior engineers a lot more productive so I could see it reducing some demand for new grad software engineers.


How many years will it need to study completed projects by senior engineers?


So, 2025 again, gotcha.


You’re better than most at tuning out geopolitical news if you found this year stable.


Predicting things won't change is typically the safe bet.


I've realized over time that I personally cannot learn from video at all. Even "great" lectures don't stick. Text does!

Being able to skim, jump around, re-read a paragraph or pause on a single sentence is how understanding actually forms for me.

What’s interesting is that LLMs lean hard into this strength of text: they make it interactive, searchable, and contextual.

To me, most of these platforms have optimized video for engagement. It's essentially "press play and hope it sticks".


The dream never dies, possibly because people remember when class time was supplanted by a movie. Anyone remember "I Am Joe's Heart"? Those movies showed that you could just sit and watch passively like TV, and you'd learn quite a bit, with professional diagrams and animations to help.

Yet your comment is true. Perhaps the difference is that science is inherently interesting because nature is confined to things that are consistent and make sense, while the latest security model for version 3.0 of this-or-that web service protocol, vs. version 2.0, is basically arbitrary and resists effective visual diagramming. Learning software (not computer science) is an exercise in memorizing things that aren't inherently interesting.


Charging by the minute might push people toward shorter, noisier and more fragmented pipelines. It feels more like a lever to discourage self-hosting over time.

It's not outrageous money today, but it's a clear signal about where they want CI to live.


We software engineers assume value comes from serving more people, faster, with less friction. But many of the things that actually make life feel coherent, such as learning a craft, maintaining friendships and building tools for one person, only work because they’re slow and specific.

Tech doesn't give us the wrong desires but the easier versions of the right ones, and those end up hollow.


I have an Apple TV and I’ve been running iSponsorBlockTV [1] on my Synology box for a while. It auto-skips sponsored segments, and combined with YouTube Premium it gives me a clean, ad-free setup.

I can’t stand those in-video intros or sponsored promos, where I’m suddenly pitched a random VPN or productivity app.

[1] https://github.com/dmunozv04/iSponsorBlockTV


This is good. It’s fascinating how it spins up interactive pages instantly. Some of the mini-apps actually feel useful, but others break in ways you wouldn’t expect.

I’m curious to see how it evolves with more complex, multi-step queries.


This vulnerability is basically the worst-case version of what people have been warning about since RSC/server actions were introduced.

The server was deserializing untrusted input from the client directly into module+export name lookups, and then invoking whatever the client asked for (without verifying that metadata.name was an own property).

    return moduleExports[metadata.name]

We can patch in the hasOwnProperty check and tighten the deserializer, but there is a deeper issue: React never really acknowledged that it was building an RPC layer. If you look at actual RPC frameworks like gRPC or even old-school SOAP, they all start with schemas, explicit service definitions and a bunch of tooling to prevent boundary confusion. React went the opposite way: the API surface is whatever your bundler can see, and the endpoint is whatever the client asks for.
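The shallow fix is straightforward; the guard is just an own-property check before the lookup, roughly like this (a sketch of the shape of the check with a made-up function name, not the actual patch):

    function resolveServerExport(moduleExports, metadata) {
      // Only accept names the module itself declares as own properties,
      // so a client-supplied string like "__proto__" or "constructor"
      // can't reach up the prototype chain.
      if (!Object.prototype.hasOwnProperty.call(moduleExports, metadata.name)) {
        throw new Error("Unknown server export: " + metadata.name);
      }
      return moduleExports[metadata.name];
    }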

My guess is this won't be the last time we see security fallout from that design choice. Not because React is sloppy, but because it’s trying to solve a problem category that traditionally requires explicitness, not magic.


To me it just looks like unacceptable carelessness, not an indictment of the alleged "lack of explicitness" versus something like gRPC. Explicit schemas aren't going to help you if you're so careless that, right at the last moment, you allow untrusted user input to reference anything whatsoever in the server's namespace.


But once that particular design decision is made, it is only a matter of time before that happens. The one enables the other.

The fact that React embodies an RPC scheme in disguise is quite obvious if you look at the kind of functionality that is implemented; some of it simply cannot be done any other way. But then you should own that decision and add all of the safeguards that such a mechanism requires; you can't bolt those on after the fact.


this

I always felt server actions had too much "magic"


All mistakes can be blamed on "carelessness". This doesn't change the fact that some designs are more error-prone and less safe.


The endpoint is not whatever the client asks for. It's marked specifically as exposed to the user with "use server". Of course the people who designed this recognize that this is designing an RPC system.

A similar bug could be introduced in the implementation of other RPC systems too. It's not entirely specific to this design.

(I contribute to React but not really on RSC.)


”use server” is not required for this vulnerability to be exploitable.


wait I'm only using React for SPA (no server rendering)

am I also vulnerable??????


Only if you are running a vulnerable version of Next.js server.


No, unless you run the React Server Component runtime on your server, which you wouldn't do with a SPA, you would just serve a static bundle.


so any package could declare some modules as “use server” and they’d be callable, whether the RSC server owner wanted them to or not? That seems less than ideal.


The vulnerability exists in the transport mechanism in affected versions. Default installs without custom code are also vulnerable even if they do not use any server components / server functions.


They were warned. I don't see how this can be characterized as anything but sloppy.


You can call anything, anytime, anywhere without restrictions or protection.

Imagine these dozens of people, working at Meta.

They sit at the table, agree to call eval(), and don't think "what could go wrong".


Eval has been known to be super dangerous since before the internet grew up and went mainstream. It is so dangerous that deploying anything containing it should come with a large flashing warning whenever you run it.


Half of web map solutions rely on workers, which can't easily be loaded from third-party origins, so they are loaded as blobs. Loading a worker from a blob is effectively an eval.
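Concretely, it looks something like this (the URL is a placeholder):

    fetch("https://cdn.example.com/map-worker.js") // some third-party origin
      .then((res) => res.text())
      .then((src) => {
        // The fetched string becomes executable code via a blob: URL,
        // which is why this is effectively an eval of remote content.
        const blobUrl = URL.createObjectURL(new Blob([src], { type: "text/javascript" }));
        return new Worker(blobUrl);
      });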


The client sort of exists to have code injected into it though?


If you want to describe text mark-up as programming, then yes. But most people do not do that.


hmm isn't eval used in a figurative sense here eh?

maybe you should get some sleep


No, their whole point is that what they are doing is the literal equivalent of calling eval. Whether that actually uses the word 'eval' or a function called 'eval' is beside the point.


> The server was deserializing untrusted input from the client directly into

If I had a dollar for every time a serious vulnerability that started like this was discovered in the last 30 years...


For the layperson, does this mean this approach and everything that doesn't use it is not secure?

Building a private, out of date repo doesn't seem great either.


Not quite. This isn’t saying React or Next.js are fundamentally insecure in general.

The problem is this specific "call whatever server code the client asks" pattern. Traditional APIs with defined endpoints don’t have that issue.


I’m not asking if it’s fundamentally insecure.

Architecturally, there appears to be an increasingly insecure attack surface in JavaScript at large, driven by insecurities in mandatory dependencies.

If the foundation and dependencies of React have vulnerabilities, React will have security issues, directly and indirectly.

This explicit issue seems to be a head scratcher. How could something so basic exist for so long?

Again, I'm asking about React and Next.js given their position of leadership in the JavaScript ecosystem. I don't think this is a standard anyone wants.

Could there be code reviews created for LLMs to search for issues once discovered in code?


To be fair, the huge JavaScript attack surface has ALWAYS been there. JavaScript runs in a really dynamic environment, and everything from XSS onwards has been fundamentally due to what you can do with that environment.

If you remember “mashups”, these were basically just using the fact that you can load any code from any remote server and run it alongside your code and code from other servers, while sharing credentials between all of them. But hey, it is very useful to let Stripe run their stripe.js on your domain. And AdSense. And Mixpanel. And while we are at it, let’s let npm install 1000 packages for a single-dependency project. It’s bad.


You mean call whatever server action the client asks? I don't think having this vulnerability was intentional.


This is only really fine as long as you have extremely clearly, well defined actions. You need to verify that the request is sane, well-formed, and makes sense for the current context, at the very least.


You would probably need to do the same if you were writing back-end in Go or something. I don't see how that is conceptually different.


As I understand it, RSC is locating the code to run by name, where the name is supplied by the client.

JS/Node can do this via import() or require().

C, C++, Go, etc. can dynamically load plugins, and I would hope that people are careful when doing this with client-supplied data. There is a long history of vulnerabilities when dlopen and dlfcn are used unwisely, and Windows’s LoadLibrary has historical design errors that made it almost impossible to use safely.

Java finds code by name when deserializing objects, and Android has been pwned over and over as a result. Apple did the same thing in ObjC with similar results.

The moral is simple: NEVER use a language’s native module loader to load a module or call a function when the module name or function name comes from an untrusted source, regardless of how well you think you’ve sanitized it. ALWAYS use an explicit configuration that maps client inputs to code that it is permissible to load and call. The actual thing that is dynamically loaded should be a string literal or similar.

I have a boring Python server I’ve maintained for years. It routes requests to modules, and the core is an extremely boring map from route name to the module that gets loaded and the function that gets called.
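In Node terms, the same idea is roughly this (the module path and handler names are made up for illustration):

    // Explicit allow-list: client-visible names map to the only functions
    // permitted to run. A Map avoids prototype-chain surprises such as
    // "toString" or "__proto__" resolving to something on Object.prototype.
    const routes = new Map([
      ["createTodo", require("./handlers/todos").create],
      ["listTodos", require("./handlers/todos").list],
    ]);

    function dispatch(routeName, args) {
      const handler = routes.get(routeName);
      if (typeof handler !== "function") {
        throw new Error("Unknown route: " + routeName);
      }
      return handler(args);
    }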


I don’t think I’ve heard of intentional vulnerabilities?


Log4j almost seemed like it


Seems subjective and a personal interpretation.


I mean yeah


xz?


> We can patch in the hasOwnProperty check and tighten the deserializer, but there is a deeper issue: React never really acknowledged that it was building an RPC layer. If you look at actual RPC frameworks like gRPC or even old-school SOAP, they all start with schemas, explicit service definitions and a bunch of tooling to prevent boundary confusion. React went the opposite way: the API surface is whatever your bundler can see, and the endpoint is whatever the client asks for.

> My guess is this won't be the last time we see security fallout from that design choice. Not because React is sloppy, but because it’s trying to solve a problem category that traditionally requires explicitness, not magic.

Now I'm worried, but I don't use React. So I will have to ask: how does SvelteKit fare in this respect?


Indeed this is pretty bad.

The vast majority of developers do not update their frameworks to the latest version, so this is something that will linger on for years. Particularly if you're on Next something-like-12 and there are breaking changes in order to go to 16 + patch.

OTOH this is great news for bad actors and pentesters.


This doesn't affect Next 12. Every single minor version of Next that's affected has a patch in the corresponding minor release cycle: https://nextjs.org/blog/CVE-2025-66478#fixed-versions


Just like the old days of PHP servers exposing their source code


How do hackers exploit it? Can I test it on my site?


> it’s trying to solve a problem category that traditionally requires explicitness, not magic.

i've been thinking basically this for so long, i'm kinda happy to be validated about this lol


while(true){

  console.log("jsjs")

}



