A much more likely culprit is your VPN server's port. If it's listening on an uncommon port (such as WireGuard's default, 51820), that traffic is likely to get throttled.
I'd bet that switching your VPN server to port 443 would solve the problem, since HTTP/3 runs on 443/udp and ISPs are reluctant to throttle it.
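If the server is WireGuard (the 51820 default suggests it is), the change is one line on the server plus each client's Endpoint. A minimal sketch; the interface name, keys, and addresses below are placeholders, not from the original post:

```ini
# /etc/wireguard/wg0.conf (server side) -- hypothetical example
[Interface]
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 443        # instead of the default 51820

# Each client then points at the new port:
# [Peer]
# Endpoint = vpn.example.com:443
```

One caveat: this only works if nothing else on that host already binds 443/udp (for example, a QUIC-capable web server).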
> ... or how it's supposed to make any programmer worth their weight in salt 10x better.
It doesn't. The only people I've seen claim such speedups are either not generally fluent in programming or stand to benefit financially from reinforcing this meme.
For every conspicuous vibecoding influencer there are a bunch of experienced software engineers using these tools to get things done. The newest generation of models is actually pretty decent at following instructions and using existing code as a template. Building line-of-business apps is much quicker with Claude Code because once you've nicely scaffolded everything you can just tell it to build stuff and it'll do so the same way you would have, in a fraction of the time. You can also use it to research alternatives to the architectural approaches and tooling you come up with, so that you don't paint yourself into a corner because you hadn't heard of some semi-niche tool that fits your use case perfectly.
Of course I wouldn't use an LLM to #yolo some Next.js monstrosity with a flavor-of-the-week ORM and random Tailwind. I have, however, had it build numerous parts of my apps after telling it all about the mise targets and tests and architecture of the code that I came up with up front. In a way it vindicates my approach to software engineering because it's able to use the tools available to it to (reasonably) ensure correctness before it says it's done.
Just the other day, ChatGPT implemented something in 10 minutes that would have taken me a week of research to figure out. What do you call that speedup? It's a lot more than 10x.
On other days I barely touch AI because I can write easy code faster than I can write prompts for easy code, though the autocomplete definitely helps me type faster.
The "10x" is just a placeholder for averaging over a series of stochastic exponents. It's a way of saying "somewhere between 1x and infinity".
> Just the other day, ChatGPT implemented something in 10 minutes that would have taken me a week of research to figure out. What do you call that speedup? It's a lot more than 10x.
Can you share what exactly this was? Perhaps I don't do anything exciting or challenging, but personally this hasn't happened to me so I find it hard to imagine what this could be.
Instead of AI companies talking about their products, I think the thing to really sell it for me would be an 8 hour long video of an extremely proficient programmer using AI to build something that would have taken them a very long time if they were unassisted.
Sure. I needed to draw some parametric and smooth Bézier curves. LLMs are beasts at figuring out the appropriate equations. It would have taken me forever to work out where all the control points should go.
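For a sense of what that task looks like in code, here is a minimal sketch of evaluating a cubic Bézier curve in Bernstein form. This is an illustration with made-up names and control points, not the commenter's actual code; placing the control points p1 and p2 is exactly the part they describe the LLM helping with.

```python
# Hypothetical sketch: point on a cubic Bezier curve at parameter t in [0, 1],
# using the standard Bernstein polynomial form.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve defined by four (x, y) control points."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Sample the curve at 21 points for drawing as a polyline.
points = [cubic_bezier((0, 0), (0, 1), (1, 1), (1, 0), i / 20) for i in range(21)]
```

De Casteljau's algorithm is the numerically gentler alternative, but the Bernstein form above is the direct translation of the textbook equations.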
I am a professional engineer with around 10 years of experience and I use AI to work about 5x faster on a site I personally maintain (~100 DAU, so not huge, but also not nothing). I don’t work in AI so I get no financial benefit by “reinforcing this meme”.
Same position, different results. I'm maybe 20% faster. Writing the code is rarely the bottleneck for me, so there's limited potential there. When I am writing the code, things I'd find easy and fast are a little faster (or I can leave AI doing them). Things that are hard and slow are nearly as hard and nearly as slow with AI; I still need to hold most of the code in my head, just as I would without AI, because it gets things wrong so quickly.
I think what you're working on has a huge impact on AI's usability. If you're working on things that are simple conceptually and simple to implement, AI will do very well (including handling edge cases). If it's a hard concept, but simple execution, you can use AI to only do the execution and still get a pretty good speed boost, but not transformational. If it's a hard concept and a hard execution (as my latest project has been), then AI is really just not very good at it.
Oh, well if it can generate some simple code for your personal website, surely it can also be the "next level of abstraction" for the entirety of software engineering.
Well, I don’t really think it’s “simple”. The code uses React, nodejs, realtime events pushed via SSE, infra provisioned via Terraform, postgres, blob storage on S3, emails sent via SES… sure, it’s not the next Google, but it’s a bit above, like, a personal blog.
And in any case, you are moving the goalposts. OP said he had never seen anyone serious claim they got productivity gains from AI. When I claim exactly that, you say “well, it’s not the next level of abstraction for all of SWE”. Obviously - I never claimed that?
If you want my opinion, I think LLMs can be pretty good at generating simple code for things you can find on stackoverflow and require minor adjustments. Even then, if you don't really understand the code you can have major issues.
Your site is a case in point of why LLMs demo well but kind of fall apart in the real world. They're pretty good at fitting Lego blocks together based on a ton of work other people have put into React, node, the SSE library you used, etc. But that's not what Karpathy is saying; he's saying "the hottest programming language is English".
That's bonkers. In my experience it can actually slow you down as much as speed you up, and when you try to do more complicated things it falls apart.
Practically every post on HN that mentions AI now ends up with a thread that is "I get 100X speed-up using LLMs" vs. "It made me slower and I've never met a single person in real life who has worked faster with AI."
I'm a half-decent developer with 40 years experience. AI regularly gives me somewhere in the range of 10-100X speed-up of development. I don't benefit from a meme, I do benefit from better code delivered faster.
Sometimes AI is a piece of crap and I work at 0.5X for an hour flogging a dead horse. But those are rarer these days.
I've posted this on another comment verbatim that was similar to yours, so apologies for the copy and paste:
Can you share what exactly this was (that got you the 10-100x speedup)? Perhaps I don't do anything exciting or challenging, but personally this hasn't happened to me so I find it hard to imagine what this could be.
Instead of AI companies talking about their products, I think the thing to really sell it for me would be an 8 hour long video of an extremely proficient programmer using AI to build something that would have taken them a very long time if they were unassisted.
I would love to make these videos for you if you want to pay for my time. Drop me an email at josh.d.griffith at gmail, tell me what you want to see, and name your compensation. I can vibe code at any scale.
That's the thing - I know what 'vibe coding' is because that's pretty much how I use AI, as an exploratory tool or interactive documentation or a search engine for topics I want surface level information about.
It does not make me a 10x-100x more efficient. It's a toy and a learning tool. It could be replaced or removed and I wouldn't miss it that much.
Clearly I am missing something. I care about quality software, so if it's making someone 100x more productive but they're producing the same subpar nonsense they would anyway, then I am not interested. Hence I want to see a really proficient programmer use it, be 10x+ more productive, and have a quality product at the end. That's what I want to see demonstrated.
I personally think everyone knows AI produces subpar code, and that the supposedly infallible humans reviewing it are just passing it along because they don't understand or don't care. We're starting to see the gaslighting now: it's not that AI makes you better, it's that AI makes you ship faster. And shipping faster (with more bugs) is now what matters, because "tech debt is an appreciating asset" in a world where AI tools can pump out features 10x faster (with the commensurate bugs/issues). We're entering the era of "move fast and break stuff" on steroids. I miss the era of software that worked.
Yep, bugs are already just another cost of doing business for companies that aren’t user-focused. We can expect buggier code from now on. Especially for software where the users aren’t the ones buying it.
Disclaimer because I sound pessimistic: I do use a lot of AI to write code.
I really wish we would shift back towards quality and reliability being major selling points in software. There are only a couple of projects I'm aware of that emphasize them, and both are pleasures to use: Obsidian (note app) and Linear (ticket tracking).
Are you doing development? Is it just you two? What are they doing during this time, aside from having originally provided capital? Are they smart money? Do you need them aside from their money? Do they have connections, a network, or something else compelling they bring to the table? If you're sure you're going to raise a seed you could give them a larger cut of that "not too far away" as you said.
To answer your questions more directly:
> 1. Is it reasonable for a Technical Founder to take a Median HCOL salary (due to strict financial constraints) while the Non-technical Founder takes €0?
> 2. Does the "Fairness" argument (matching salaries) trump the "Runway" argument (survival) in early-stage startups?
> 3. Did they waive the right to claim "unfairness" by joining the partnership with full prior knowledge of this asymmetry?
There is no "right" and "wrong" way. There is no "fair". It's whatever you can swing. You are partners, and if you want to keep it together you need to avoid building resentment, but it's also a business, and you need to bet on yourself. The "fairness" obsession is naive, and it's exactly what slippery MBAs like to foist on devs who are too zoomed in to value their time properly.
What's stopping you from getting a job and doing this part-time on your own, retaining all equity?
I’m doing all of the development and he’s handling product and business. It’s just the two of us. Now that we’re entering an accelerator, it’s basically all or nothing for both of us.
Maybe the better question is this: is it ethically acceptable to expect him to take no salary until the seed round, without additional compensation, given that this is the only way we can realistically reach that round? To my mind, if he doesn't make this sacrifice, he loses 100% of everything anyway.
This is cool, but we'd be naive to think the other side isn't also learning from this operation. The "gotcha" questions that foiled them at the end will likely make it into their playbook for the next go-around, and those attacks are going to be more sophisticated.
Same; that would really improve the remote router scenario. I've had a router refuse to boot up after a power outage until I manually ran a disk check. I'd like to at least be able to force start-up no matter what, but journaling is the proper fix.
Agreed! I think the only other solution (at present) is to mount as much of the system read-only as possible to minimize the risk of needing to `fsck` after an unclean shutdown. That and putting it behind a UPS, but of course that only lasts so many hours.
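A minimal sketch of that read-only setup, assuming an ext4 root on an SD card; the device names, sizes, and mount points below are placeholders:

```
# /etc/fstab -- hypothetical example for a mostly read-only router
/dev/mmcblk0p2  /         ext4   ro,noatime              0 1
tmpfs           /var/log  tmpfs  nosuid,nodev,size=16m   0 0
tmpfs           /tmp      tmpfs  nosuid,nodev,size=32m   0 0
```

With the root mounted `ro`, an unclean shutdown leaves nothing on disk to corrupt; anything that genuinely must persist gets its own small journaled partition.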
I was forced to sign a full exit contract to avoid being sued.
"The above is our final offer. Please give us your reply no later than Wednesday this week. After confirmation, we will sign the formal agreement no later than Friday this week. If you do not accept, we will put the new round financing work on hold and immediately initiate a lawsuit against Allen regarding labor relations, shareholder qualifications and directors' fiduciary duties. We have sufficient evidence of Allen's violations of the labor contract, employee handbook and shareholder agreement, such as his absence from work, providing services to third parties and obtaining benefits during his employment, and sending the company's confidential information to third parties. We will also fully disclose such situations to the market."
Always have your own attorney look over paperwork. In what country are you?
It doesn't sound like you want to pursue it, but you could make a case that you signed under duress. Not knowing any specifics about your situation, personally, I would focus on moving on to the next thing.
On their tech support page [1], Google Fi is said to be resistant, if not immune, to SIM swap attacks because the attacker needs physical access to your device and your Google account. Yet earlier this year [2], a Google Fi breach was said to have exposed Fi users to SIM swapping. Can anyone shed light on how this can happen without someone having your phone?
> Can anyone shed light on how this can happen without someone having your phone?
I don't know the specific details of this particular incident, but I'd like to emphasize that Google Fi, at least in the US, is a virtual network on top of T-Mobile's physical one. There is some extra security through obscurity that makes simple social-engineering attacks harder, but fundamentally it is still T-Mobile underneath.
Think of it: you lose your phone and go to a store, and a store employee (or customer service over the phone) is able to issue you a new SIM. Now the same employee takes a bribe and issues a SIM to the attackers, who use it to steal your funds.
Implementation flaws like that are always possible, but my concern is that in so many cases, SIM swaps are ridiculously easy by design, or more accurately by the absence of any real security procedures on the carrier's side.
The issue is that the FCC mandates that port-outs complete within 4 hours, and stores don't make money handling them, so their goal is to get you out the door ASAP so they can focus on revenue-generating work. That's the why, plus the bribe factor.
We have one: the internet. No app review process. No single-store censorship. No payment fees. App stores are rent-seeking walled gardens which are becoming obsolete as the web gains more native/device APIs.
The ultimate filter is: do not visit the website. It comes down to the cost of having the freedom to do the filtering yourself vs. outsourcing that effort to a company sharing (some of) your values.
The latter locks you in since you are in no position to change the company's mind.
> It has not been shown conclusively that microwaves (or other non-ionizing electromagnetic radiation) have significant adverse biological effects at low levels. Some, but not all, studies suggest that long-term exposure may have a carcinogenic effect.
He might not be entirely wrong. They're still not going to sell any to the general public, though, especially if it is carcinogenic.