Hacker News | ttul's comments

My late friend Dan Kaminsky famously wrote the Perl module "OzymanDNS", which let you tunnel SSH sessions over DNS, thus evading certain firewalls such as those controlling access to public WiFi. Modern public WiFi setups filter DNS too, rendering this technique moot, but I remember using OzymanDNS to get WiFi access on the Caltrain, and that was highly satisfying.

https://boingboing.net/2004/06/21/tunneling-ssh-over-d.html
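For context, the core trick in DNS tunneling is to smuggle data through as query names for a domain whose authoritative nameserver you control. A minimal sketch of the client-side encoding (the tunnel domain and framing here are hypothetical illustrations, not OzymanDNS's actual wire format):

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 bytes (RFC 1035)
TUNNEL_DOMAIN = "t.example.com"  # hypothetical domain whose NS you control

def encode_chunk(data: bytes, seq: int, domain: str = TUNNEL_DOMAIN) -> str:
    """Pack a chunk of upstream data into a DNS query name.

    Base32 is used because DNS names are case-insensitive and
    effectively restricted to letters, digits, and hyphens.
    """
    b32 = base64.b32encode(data).decode().rstrip("=").lower()
    # Split into labels of at most 63 characters each.
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join([str(seq)] + labels + [domain])

def decode_chunk(qname: str, domain: str = TUNNEL_DOMAIN) -> tuple[int, bytes]:
    """Recover the sequence number and payload on the server side."""
    prefix = qname[: -(len(domain) + 1)]  # strip the trailing ".t.example.com"
    seq, *labels = prefix.split(".")
    b32 = "".join(labels).upper()
    b32 += "=" * (-len(b32) % 8)  # restore base32 padding
    return int(seq), base64.b32decode(b32)
```

The server answers each query with downstream data (e.g. in TXT or CNAME records), which is how you get a bidirectional channel through a resolver that only "speaks DNS".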


Yeah. Most of my public repos have 0 stars. Most of what I write sucks.

GitHub stars (or any online 'star count') are not an indicator of quality.

> not an indicator of quality

I mean, it’s an indicator. Just not a definitive—or individually sufficient—one.


Stars occasionally correlate with quality but more often it's timing and naming. I have a total of 40k stars on GitHub, and I know the code is shit in most of those repos (many written back when I was 16-18 as I was just learning to code). Jumping on hype trains before they start is how you get stars.

Yeah, but knowing something sucks means you are probably reasonably competent at coding. =3

https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect


Even if you're not correct, I respect your positivity and constructive attitude

It's good to raise people's expectations of themselves


Self-reported studies are arguably weaker evidence, but are common in some areas for ethics reasons. In general, if errors are truly random, then they will cancel out over larger/more frequent population samples.

The study concluded that the skills needed to be effective at some task are the same skills needed to correctly evaluate whether you are actually proficient at that task.

https://arxiv.org/abs/2505.02151

If the data suggests another explanation is more applicable, then I'd be interested in the primary papers/studies the editorialized opinion seems to have omitted. =3
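The errors-cancel-out point is just the law of large numbers. A toy simulation (assuming idealized symmetric ±1 reporting errors; the function name and setup are illustrative, not from any cited study):

```python
import random

def mean_reporting_error(n: int, seed: int = 42) -> float:
    """Average of n symmetric self-report errors (+1 or -1).

    If over- and under-estimation are equally likely, the
    average error shrinks toward zero as the sample grows.
    """
    rng = random.Random(seed)
    return sum(rng.choice((-1, 1)) for _ in range(n)) / n
```

The caveat: if the errors are systematic (everyone inflating by the same amount), the average converges to the bias rather than to zero, which is why the "truly random" assumption is doing the heavy lifting.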


No it doesn't. The people with the lowest self perception also have the lowest actual skill. Look at the chart:

https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect#...


I guess your linking to it was a self-fulfilling prophecy

If you read your own reference (not the picture, but where you took it from on Wikipedia) really really carefully, you might be able to tell why it so perfectly applies to you

The person with little knowledge overestimates their capability, and the person who actually knows how complicated [the thing] is usually isn't as confident they've mastered it.

Your take on that makes absolutely no sense


You’re talking about a confidence and ability gap. I have heard of the Dunning-Kruger effect. I accept all of that.

But the claim above was that having low confidence was correlated to higher skill. I.e., skill and confidence are anti-correlated. The chart does not show that. The lowest data point for confidence is the point on the left of the chart. This is also the data point corresponding to people who have the least competence. Having low confidence is not evidence that you're secretly an expert. Confidence and competence are still positively correlated according to that chart.

The Dunning-Kruger effect is not so strong that there are scores of novices convinced they are experts in a field. But in your case, I admit the data may not tell the full story.


"Nor would a wise man, seeing that he was in a hole, go to work and blindly dig it deeper..." ( The Washington Post dated 25 October 1911 )

"Baloney Detection Kit"

https://www.youtube.com/watch?v=aNSHZG9blQQ

Best regards =3


That isn't what that shows, and the article you linked to even warns:

> In popular culture, the Dunning–Kruger effect is sometimes misunderstood as claiming that people with low intelligence are generally overconfident, instead of denoting specific overconfidence of people unskilled at particular areas.

Dunning-Kruger has also been discredited, with the suggestion that they may have been overconfident themselves:

The Dunning-Kruger Effect Is Probably Not Real (2020) https://www.mcgill.ca/oss/article/critical-thinking/dunning-...

Debunking the Dunning‑Kruger effect – the least skilled people know how much they don’t know, but everyone thinks they are better than average (2023) https://theconversation.com/debunking-the-dunning-kruger-eff...


Are you replying to the wrong comment? The person you're responding to seems to make the same point

Self-reported studies are arguably weaker evidence, but are common in some areas for ethics reasons. In general, if errors are truly random, then they will cancel out over larger/more frequent population samples.

The study concluded that the skills needed to be effective at some task are the same skills needed to correctly evaluate whether you are actually proficient at that task.

Or put another way, the <5% of the population who are narcissists by nature become evasive when their egos are perceived as threatened. Thus, they often pose a challenge in a team setting, as compulsive lying or LLM turd-polishing is orthogonal to most real-world tasks.

People are not as unique as they like to believe, and spotting problems is trivial after you meet around 3000 people. Best to avoid the nonsense, and get outside to enjoy life. Have a great day =3


No idea why we all get negative karma on this thread, as I do respect a cited-source opinion even if we disagree. Do have a look around for papers rather than editorialized content in the future, and note that posting LLM agent output from an account is a violation of YC usage policy. Have a great day =3

https://arxiv.org/abs/2505.02151


Doesn’t matter if the recruiter doesn’t call you back because you’re not a 1000x engineer.

Why would anyone settle for underpaid positions from an agency taking a 7% contract cut and purging CVs from any external firm also contracting with their services?

Most people figure out this scam very early in life, but some cling to terrible jobs for unfathomable reasons. =3


> Why would anyone settle for

The answer to such questions is always that, given their circumstances, they have no realistic choice not to.

This is very obvious, and it's frustrating to continually see people pretend otherwise.


> they have no realistic choice not to

If folks expect someone else to solve problems for them, then 100% of people end up unhappy. The old idea of loyalty buying a 30-year career with vertical movement died sometime in the 1990s.

An Ikigai chart will help narrow down why people are unhappy:

https://stevelegler.com/2019/02/16/ikigai-a-four-circle-mode...

Even if folks are not thinking about doing a project, I still highly recommend this crash course in small business contracts:

https://www.youtube.com/watch?v=jVkLVRt6c1U

Rule #24: The lawyer's "Strategic Truth" is never to lie, but also to avoid voluntarily disclosing information that may help opponents.

Best of luck =3


> If folks expect someone to solve problems for them

In this type of situation, the fundamental issue is that making progress depends on many people acting in unison to increase their bargaining power, which is (a) hard to arrange even if everyone who acted this way would benefit, and (b) actually may be detrimental to some people (usually the high performers).


I agree it is nearly impossible to alter the inertia of existing firms. Most have entrenched process people that defend how things are done right up until a company enters insolvency. Fine if you sell soda or rubber tires, but a death knell for technology or media firms.

In my observations it is usually conditioned fear, personal debt-driven risk aversion, and/or failure to even ask if the department above you is really necessary. These days, it is almost always easier to go to another firm if you want a promotion. =3


+1 star for ttul

I hate to say it, but this looks like the sort of thing a CEO told their team to build on Monday morning in a panic because they are grasping for ways to participate in the AI craze. And the team did just that: they built it that morning using Claude Code.

There is truly nothing original here and the product doesn't have a chance in hell of earning money. Local LLMs on-device will be dominated by the device vendors, whose control of the hardware stack combined with their ability to subsidize billions of dollars of machine learning research gives them an unfair advantage. Apple knows what the next generation of silicon will deliver, and their ML engineers are already hard at work building models that will be highly optimized for that silicon a year or two ahead of time. Open source models are really great and are backed by well funded labs; however, delivering these models on-device in a way that pleases users will never be easier than it is for the vendors of the devices.

Plus, device vendors have ways of making money from local LLMs that third-party app providers do not. They can make their local LLM free and earn money on the hardware play, without skipping a beat on the billions of dollars of ongoing R&D. I don't see how third party app vendors make money here when they will be competing with the decent, totally free alternative that Apple and Google (and Samsung etc.) will load on in the next year or two.


To be fair I've followed ente for a while and they seem to let their teams have side projects if it falls in line with the overall ente ethos.

Same with Kagi. That's where Kagi News was born.

I quite like the ethos, but this Ensu definitely seems underbaked.


I do find it quite funny how, unless you're using one of the frontier labs' interfaces, more or less all other 'model providers' are using small models that work on a MacBook. Proton did something similar - I've tried their version and I find it pretty awful relative to just running Qwen 3.5 locally.

Wanted to share a message here with the CEO not to feel too bad because little is more common than getting caught by this tech

But where are they! https://ente.com/about

Small team, rooting for them


I write my comment with admiration for founders, because I am one. That being said, chasing trends without paying attention to the steamroller has killed more than one very good company and I have plenty of scar tissue as evidence...

You would think so but Apple certainly hasn’t managed it yet.

Counter-position (not sure it's better than yours): what are the chances that device makers would actually offer something seriously local, and not just something that works in airplane mode but then still connects to their cloud later, if not for post-sale monetization then at least for features providing better brand lock-in? I mean, just look at how well the market for TV sets that don't try to shove "services" down buyers' throats is developing...

But sure, making money with a standalone "local first is our headline feature" product will be incredibly hard against those, no doubt about that. In light of the limited quality of what local models can achieve, the privacy bonus just won't compel many to pay. But that only means that this "morning with Claude" you are suggesting might be just the right amount of investment for the result you'd realistically expect. And is that so bad? I'd argue the reverse: bundling up the low-hanging fruit, not by some hobbyist who will lose interest two weeks on, but by a company big enough to keep it going while small enough not to be a VC furnace that will inevitably turn on users once the runway runs out (*) - that's an opportunity to fill a niche few others can. Valuable for users who don't want to roll their own deployment of open source models (can't, or are unwilling to commit to keeping them up to date, assuming that Ente does keep that ball rolling), and also valuable for the company if the investment actually is so low that it pays for itself by raising awareness for their other products that apparently do earn them money.

(*) I was googling around a little, wondering if they actually are as close to bootstrapped as they seem on the surface, and yes, that's supposedly the core idea [0], but despite that they also took 100 kUSD in "non-diluting" funding (basically a gift, then?) from Mozilla with the explicit goal "to promote independent AI and machine learning" [1]. So not a CEO whim, but following up on a promise made earlier. If they actually did avoid spending all that money on a one-off, and instead went smaller planning to keep it current over a longer time horizon, I'd congratulate them on an excellent choice.

[0] https://ente.com/blog/5-years-of-ente/

[1] https://ente.io/blog/mozilla-builders/

The hn discussion for [1] seems to be completely missing the point, that Mozilla program isn't about funding an image host (yeah, I'd also prefer if Mozilla focused on the Browser and perhaps Thunderbird, but the foundation is what it is): https://news.ycombinator.com/item?id=41681666


Tokens consumed: 1.6 billion
Estimated token cost: $1,591

Wow.


I agree with you, mostly. My read is that Twelver Shi’ism is not a unified hierarchy, and a marja’s fatwa normally binds that marja’s own followers rather than all Shi’a, so your institutional point is broadly right.[1][2] It is too strong, though, to say the anti-nuclear position was simply “invented for PR”: Khamenei did publicly describe it as a real fatwa.[3] At the same time, Iran’s enrichment posture _does_ fit the description of a threshold state, with large stocks of uranium enriched to 60%, so it is fair to say the ruling also had strategic and diplomatic value.[4]

The parts I would soften are the specific claim about Sistani having a significant following inside the IRGC, which MIGHT be true but is much harder to substantiate publicly (although, maybe you have some behind-the-scenes knowledge?), and the certainty of motive. Still, your last sentence is basically right: these rulings are not _immutable_. After Ali Khamenei’s death, Iran’s foreign minister said (quoting the Reuters article), “fatwas depend on the Islamic jurist issuing them,” and added he was “not yet in a position to judge the jurisprudential or political views of Mojtaba Khamenei…” This reinforces the point that doctrine can shift if the leadership chooses.[5]

[1] Encyclopaedia Britannica, “Twelver Shi’ah.”

[2] Al-Islam.org, “Question 49: Difference between hukm and fatwa.”

[3] Leader.ir, “Ayatollah Khamenei in the Eid al-Fitr congregational prayers” and “Leader’s remarks on anti-Iran sanctions and Yemen aggressions by Saudi Arabia.”

[4] Arms Control Association, “The Status of Iran’s Nuclear Program,” and ACA analysis citing the IAEA’s 440.9 kg figure.

[5] Reuters, “Iran says nuclear doctrine unlikely to change, Hormuz Strait needs new protocol” (March 18, 2026).


Strategically, it seems like a dumb move. Right now, Congress is unlikely to approve Trump’s request for $200B to fund the war effort. But if Americans can be convinced that Iran could somehow hit American cities, they would call their members of Congress in a heartbeat and that money would presumably flow without interruption.

Why time the medium-range missiles for now? It seems like yet another own goal for this desperate and poorly coordinated regime.


I can't speak for Iran, but it may be a warning against attempting to land troops on Kharg Island. They're showing that they've been "nice" so far, but they have escalation paths America may not have considered. I think most people thought they were limited to short range missile strikes.

Or the US could just stop bombing Iran? Then there would be no reason for Iran to attack American cities.

Yeah, that would be nice. I'm worried this will continue to escalate.

You and 97% of the globe.

Americans can be convinced of anything without too much effort so that isn’t really a factor here.

They just don't need to be convinced of anything. It's not like normal people have a say in this, just a few leaders doing what they want. A few fake news stories saying that there's so much support.

They certainly are. And this is likely, to some degree, a response to enterprise security desires. Enterprise endpoints are locked down already - no need for extra external API security if it's just the user's desktop communicating as usual.

I feel like this is absolutely not the case. Our corporate infosec guys are freaking out, as developers and general users alike are finding all new ways to poke holes in literally everything.

We're finding out quickly that enterprise endpoints are not locked down anywhere near enough, and the stuff that users are creating on the local endpoints is quickly outpacing the rate at which SOC teams can investigate what's going on.

If you're using Claude via Anthropic's SaaS service it's near impossible to collect logs of what actually happened in a user's session. We happen to proxy Claude Code usage through Amazon Bedrock and the Bedrock logs have already proven to be instrumental in figuring out what led a user to having repeated attempts to install software that they wouldn't have otherwise attempted to install - all because they turned their brains off and started accepting every Claude Code prompt to install random stuff.

Sandboxing works to an extent, but it's a really difficult balance to strike between locking it down so much that you neuter the tool and having a reasonable security policy.


> If you're using Claude via Anthropic's SaaS service it's near impossible to collect logs of what actually happened in a user's session.

If you are big into logs, OpenAI might be more your speed. They've got an extremely good logging UI in their platform web app. I use it all the time to figure out what the hell copilot was thinking.


Oh so much this, in a sense.

Look, as a software dev myself, I really like that my company lets us use our computers the way we see fit. Pre- or post-AI with no restrictive lockdown. Been there, hated that.

But I totally get the freaking out over "normal devs". The amount of stuff most people think is reasonable, AI or not, is mind boggling. For myself of course I like to just be able to be responsible myself. But as a security team I'd also be freaking out.

Like, the amount of people that find our super boring, totally corporate "security training videos", helpful and insightful and "oh dang I'd never have thought of that!" is mind boggling all by itself. Never mind any actual security training that'd be useful to someone with half a brain. You can literally just click through the 8+ hours of stuff you're supposed to watch / answer / do in 30 minutes.


This is so true. I don’t like Telegram for a host of reasons, but the bot architecture is second to none. Try creating a bot in Slack. You’ll pull your hair out for hours. Same goes for Discord. Utter nightmare. Telegram? You send a DM and it is basically done.

Discord webhooks aren't too bad… but the proper bot thing is ridiculous. They really lack a development-mode server; having to know everything about OAuth and token permissions before even starting is bonkers, and why I even need an app at all is beyond me. I'd probably have my bot completely implemented in Telegram in the time it takes to figure out what an app even is in Discord and how to add a new app to my server.

There’s always room for improvement…
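To make the "basically done after a DM" point concrete: a Telegram echo bot really is just HTTP polling against the Bot API. The getUpdates and sendMessage methods below are real Bot API methods; the token is a placeholder for the one @BotFather hands you. A minimal stdlib-only sketch:

```python
import json
import urllib.request

API_BASE = "https://api.telegram.org"
TOKEN = "123456:ABC-DEF"  # placeholder; get a real one by DMing @BotFather

def method_url(method: str, token: str = TOKEN) -> str:
    """Build the URL for a Bot API method call."""
    return f"{API_BASE}/bot{token}/{method}"

def call(method: str, **params):
    """POST a Bot API method and return the decoded 'result' field."""
    body = json.dumps(params).encode()
    req = urllib.request.Request(
        method_url(method),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]

def echo_forever():
    """Long-poll for new messages and echo each one back."""
    offset = 0
    while True:
        for update in call("getUpdates", offset=offset, timeout=30):
            offset = update["update_id"] + 1
            msg = update.get("message")
            if msg and "text" in msg:
                call("sendMessage", chat_id=msg["chat"]["id"], text=msg["text"])
```

Run `echo_forever()` with a real token and the bot echoes every DM - no OAuth dance, no app registration, no server or webhook setup required.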

Case in point: my parents. Built a house in 1988 and they still live there. Two people in 3500 square feet. Four bathrooms and five bedrooms. Meanwhile, you need a family income of 3x the median to rent a townhouse 1/3rd the size nearby.

This is beyond ridiculous and it’s totally unsustainable.

