
It sounds harsh, but you're most likely using it wrong.

1) Have an AGENTS.md that describes not just the project structure but also the product and business (what it does, who it's for, etc.). People expect LLMs to read a snippet of code and be as good as an employee who has an implicit understanding of the whole business; you must give it all that information. Tell it to use good practices (DRY, KISS, etc.), and add patterns it should use or avoid as you go (a rough sketch follows this list).

2) It must have source access to everything it interacts with. Use a monorepo, workspaces, etc.

3) Most important of all, everything must be set up so the agent can iterate, test, and validate its changes. It will make mistakes all the time, just like a human does (even basic syntax errors), but it will iterate and end up at a good solution. It's incorrect to assume it will write perfect code blindly, without building, linting, testing, and iterating on it. No human would either. The LLM should be able to determine if a task was completed successfully or not.

4) It is not expected to always one-shot perfect code. If you value quality, you will glance at the output and sometimes have to reply: do it this other way, extract this, refactor that. Having said that, you shouldn't need to write a single line of code (I haven't for months).
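
To make point 1 concrete, here's a rough sketch of what an AGENTS.md might contain. Everything in it (the product, paths, and rules) is made up for illustration; adapt it to your own project:

    # AGENTS.md

    ## Product
    Acme Billing is invoicing software for small accounting firms.
    Users are accountants, not developers; reliability beats novelty.

    ## Structure
    Monorepo: apps/web (frontend), apps/api (REST backend), packages/shared.

    ## Practices
    - Prefer DRY and KISS; small, focused modules.
    - Run the linter, build, and tests before declaring a task done.
    - Avoid ad-hoc date math; use the helpers in packages/shared.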

Used correctly, LLMs allow you to complete in minutes tasks that would otherwise take hours, days, or even weeks, with higher quality and fewer errors.

Use Opus 4.5 with other LLMs as a fallback when Opus is being dumb.


> Most important of all, everything must be set up so the agent can iterate, test, and validate its changes.

This was the biggest unlock for me. When I receive a bug report, I have the LLM tell me where it thinks the source of the bug is located, write a test that triggers the bug (i.e., fails), design a fix, and finally implement the fix, then repeat. I'm routinely surprised by how good it is at doing this, and by the speed at which it works. So even if I have to manually tweak a few things, I've moved much faster than I would have without the LLM.
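
To illustrate the "write a test that triggers the bug" step, a minimal sketch of such a regression test (Vitest here; formatCents and the trailing-zero bug are invented for illustration):

    import { describe, expect, it } from "vitest";
    import { formatCents } from "../src/money";

    describe("formatCents", () => {
      // Reproduces the reported bug: 1050 cents was rendered as "$10.5"
      // instead of "$10.50". This test fails until the fix lands, which
      // gives the agent something concrete to iterate against.
      it("keeps trailing zeros in the cents part", () => {
        expect(formatCents(1050)).toBe("$10.50");
      });
    });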


"The LLM should be able to determine if a task was completed successfully or not."

Writing logic that verifies something complex basically requires having already solved the problem.


Situation A) The model writes a new endpoint, and that's it.

Situation B) The model writes a new endpoint, runs lint and build, adds e2e tests with sample data, and runs them.

Did situation B mathematically prove the code is correct? No. But the odds that the code is correct increase enormously. You constantly see the agent find errors at any of those steps, errors that would otherwise have slipped by, and fix them.
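
For a sense of what situation B adds, a minimal sketch of the kind of e2e check meant here (supertest against an Express-style app; the /health endpoint and app module are invented for illustration):

    import { expect, it } from "vitest";
    import request from "supertest";
    import { app } from "../src/app";

    // Exercises the new endpoint end to end with sample data, so the agent
    // can run it (after lint and build) and catch its own mistakes.
    it("GET /health returns the service status", async () => {
      const res = await request(app).get("/health");
      expect(res.status).toBe(200);
      expect(res.body).toEqual({ status: "ok" });
    });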


LLM-generated tests, in my experience, are really poor.

That doesn't change the fact that what I mentioned greatly improves agent accuracy.

An AI-generated implementation with AI-generated tests left me with some of the worst code I've witnessed in my life. Many of the passing tests it generated were tautologies (i.e., they would never fail even if the behavior was incorrect).
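
For example, a tautology of the kind described might look like this (names invented): the expected value is derived from the code under test itself, so the assertion can never fail:

    import { expect, it } from "vitest";
    import { applyDiscount } from "../src/pricing";

    // Tautological: both sides call the function under test, so this
    // passes even if applyDiscount is completely wrong.
    it("applies the discount", () => {
      expect(applyDiscount(100, 0.2)).toBe(applyDiscount(100, 0.2));
    });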

When the tests failed, the agent tended to change the (previously correct) test, making it pass but leaving it functionally incorrect, or it "wisely" concluded that both the implementation and the test were correct and that external factors were making the test fail (there weren't).

It behaved much like a really naive junior.


Which coding agent and which model?


Actually, it borderline undermines it, because it's shit building upon shit.

Him being a hypocrite doesn't make him wrong

Do as I say, not as I do.


Last year they claimed they had $800k in ARR from sponsors alone[1]. Add to that whatever they made by selling Tailwind Plus ($299 individual / $979 teams, one-time payment).

How much money do you really need to maintain a CSS library? I understand everyone wants a really fancy office in an expensive city, lots of employees with very high salaries and generous perks, and so on. But none of that is needed to maintain a CSS library (one that is already more or less feature-complete).

I think Tailwind was making a lot of money (surely over a million), expanded and got unnecessarily bloated just because they had all that money, and now that their income has dropped to what is still a lot of money for a CSS library, they're angry that they have to cut expenses to a more reasonable level.

I guess it worked out for them because now they have even more sponsoring.

And they played the "AI bad" get-out-of-jail-free card when a lot of their drop in sales probably comes from shadcn/ui and others that offer something similar for free.

[1] https://petersuhm.com/posts/2025/


> How much money do you really need to maintain a CSS library?

If you want to continue to develop new versions, you need enough to pay as many engineers as that takes. If you're not developing new versions, the money from sponsors will eventually stop.

> And they played the "AI bad" get-out-of-jail-free card when a lot of their drop in sales probably comes from shadcn/ui and others that offer something similar for free.

shadcn is built on top of Tailwind. If Tailwind dies, so does shadcn.


If Tailwind dies, the CSS classes stay the same. What maintenance do they need that can’t be folded into another project?

> shadcn is built on top of Tailwind. If Tailwind dies, so does shadcn.

They can fork Tailwind into "openwind" and keep using the stable version for a looong time with minor fixes.

And that would probably benefit shadcn somewhat since they would have more control.


And how would you adjust Shadcn salaries to account for this additional work? Do we expect open source labour to be subsidised by maintainers while the rest of us find work at FAANG?

How much work are we talking?

It would be in their best interest to keep "openwind" stable since changes to the CSS lib would require extra work in their component.

Different incentives.


Enough for multiple full-time jobs. They've laid off staff who handled tasks they can no longer afford to pay for.

Is keeping both stable in their best interest or yours?

The set of options includes choosing to not keep anything stable. They can abandon both and go do other things. If the market wants them to keep x alive, it can offer a premium.


We'll have to agree to disagree then.

Because to me, Tailwind maintenance looks like a job for 2 devs at best.

They have 3 founders. They don't even need to hire.


This seems kinda circular: they need to release new versions to pay developers. They need to pay developers to create new versions.

I hope they have better reasons to release new versions? Not releasing new versions also has its charm: less churn.


> How much money do you really need to maintain a CSS library?

Seems to me like Tailwind is a relatively complex beast covering a lot of ground, not to mention that web browsers are living/evergreen projects that are constantly moving forward, so the lib needs frequent updates. I don't think you can avoid this (just by the nature of the project). You also need to be a CSS expert who follows browser and feature development closely, on top of having an excellent grasp of JS/TS and the build ecosystem (Lightning CSS, Vite...). I mean... a few excellent engineers and a designer is probably just the bare minimum to keep Tailwind maintained.


If browsers are breaking old CSS, making new releases necessary, then that seems like a bad situation. I thought browsers were good at maintaining backward compatibility? Not so for Tailwind?

I mean, just go over the v4.x.x release changelogs [0].

The "web platform" is evolving at a decent pace in general [1][2]. You can sometimes do the same thing in 50 different ways (thanks to the breadth of css features and js apis and backwards compatibility), but there may be a much more elegant and robust solution on the horizon and when it hits the baseline, chances are it would likely lead to a simpler framework codebase and/or shrinked output if integrated... and therefore such a feature should be integrated. Now do this a zillion times over the life of the project. You have to keep up.

Fewer hacks, less code, smaller outputs.

And THEN you have all the bug reports and new feature requests.

And THEN you're supposed to work on something built on top of Tailwind that you can actually sell so you have something to eat tomorrow.

[0]: https://github.com/tailwindlabs/tailwindcss/releases

[1]: https://web.dev/blog

[2]: https://developer.chrome.com/new


If the old way didn't break, it's not true that you have to change it. You can ignore the new stuff if you want to.

The biggest food-related problem in the US is obesity. Lean meat is very satiating and really helps with keeping weight in check. A McDonald's meal, of course, is the opposite: you eat more than half your day's calories in a few minutes.

Hardware would catch up. And IPv4 would never go away. If you connected to 1.1.1.1, it would still be good ol' IPv4. You would just have, in addition, the option to connect to 1.1.1.1.1.1.1.2 if the entire chain supported it. And if not, it could still be worked around in software with proxies and NAT.

So... just a less ambitious IPv6 that would still require dual-stack networking setups? The current adoption woes would've happened regardless, unless someone comes up with a genius idea that doesn't require any configuration/code changes.

I disagree. The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply. A less ambitious IPv4 is exactly what we need in order to make any progress.

It's not _that_ different. Larger address space, more emphasis on multicast for some basic functions. If you understand those functions in IPv4, learning IPv6 is very straightforward. There are some footguns once you get to enterprise-scale deployments, but that's just as true of IPv4.

Lol! IPv4 uses zero multicast (I know, I know, technically there's multicast, but we all just understand broadcast). The parts of an IPv4 address and their meaning have almost no correlation to the parts of an IPv6 address and their meaning. Those are pretty fundamental differences.

IP addresses in both protocols are just a sequence of bits. Combined with a subnet mask (or prefix length, the more modern term for the same concept), they divide into a network portion and a host portion. The former tells you what network the host is on; the latter uniquely identifies the host on that network. This is exactly the same for both protocols.
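
A small sketch of that shared concept (TypeScript; the helper name is invented), using IPv4 for brevity. The same mask-and-split logic applies bit-for-bit to IPv6, just with 128 bits instead of 32:

    // Split a dotted-quad IPv4 address into network and host portions,
    // given a prefix length (e.g. /24). IPv6 works identically, only wider.
    function splitV4(addr: string, prefixLen: number) {
      const n = addr.split(".").reduce((acc, o) => ((acc << 8) | parseInt(o, 10)) >>> 0, 0);
      const mask = prefixLen === 0 ? 0 : (0xffffffff << (32 - prefixLen)) >>> 0;
      const dotted = (x: number) =>
        [x >>> 24, (x >>> 16) & 0xff, (x >>> 8) & 0xff, x & 0xff].join(".");
      return { network: dotted((n & mask) >>> 0), host: (n & ~mask) >>> 0 };
    }

    console.log(splitV4("192.168.10.10", 24)); // { network: "192.168.10.0", host: 10 }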

Or what do you mean by “parts of an IPv4 address and their meaning”?

That multicast on IPv4 isn’t used as much is irrelevant. It functions the same way in both protocols.


IPv4 uses ARP, which is just half-baked multicast. IPv6 is much better designed.

The biggest difference is often overlooked because it's not part of the packet format or anything: IPv4 /32s were not carried over to IPv6. If you owned 1.1.1.1 on IPv4 and you switch to IPv6, you get an entirely different address instead of 1.1.1.1::. Maybe you get an IPv4-mapped IPv6 address, ::ffff:1.1.1.1, but that's temporary and can't be subdivided into something like 1.1.1.1.2.

And then all the defaults about how basically everything works are different. A home router in v6 mode means no DHCP, no NAT, and hopefully a firewall. In theory you can make it work a lot like v4, but by default it doesn't.


multicast has been dead for years

> The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply.

In my experience, the differences are just an excuse; however similar you made the protocol to IPv4, the people who wanted an excuse would still manage to find one. Deploying IPv6 is really not hard, you just have to actually try.


Part of the IPv6 ambition was fixing all the suboptimally allocated IPv4 routes. They considered your idea and decided against it for that reason. But had they done it, we would've already been on v6 for years and had plenty of time to build some cleaner routes too.

I think they also wanted to kill NAT and DHCP everywhere, so there's SLAAC by default. But it turns out NAT is rather user-friendly in many cases! They even had to bolt on that v6 privacy extension.


What do you mean by suboptimal allocation?

The IPv4 routing table contains many individual /24 subnets that cannot be summarized, causing bloat in the routing tables.

With IPv6, that can be simplified to just a couple of /32 or /48 prefixes per AS.


This, because a bunch of random /24s were sold off to different ISPs due to address scarcity.

> I disagree. The current adoption woes are exactly because IPv6 is so different from IPv4.

How is IPv6 "so different" than IPv4 when looking at Layer 3 and above?

(Certainly ARP vs ND is different.)


I didn't say it was different "when looking at Layer 3 and above". I said it's different from IPv4. At the IP layer.

At the IP layer, just being different is 90% of the trouble. Being less ambitious would have some upsides and downsides, but it wouldn't seriously change that.

> I said it's different from IPv4. At the IP layer.

In what way? Longer addresses? In what way is it "so different" that people are unable to handle whatever differences you are referring to?

We used to have IPv4, NetBEUI, AppleTalk, and IPX all in regular use, and that's just on Ethernet (of various flavours), never mind different Layer 2s. Have network folks become so dim over the last few years that they can't handle a different protocol now?


But that is a bug in history. IPv6 was standardized BEFORE NAT.

"Most of what they know from IPv4" is just NAT.

> A less ambitious IPv4 is exactly what we need in order to make any progress

But we're already making very good progress with IPv6? Global traffic to Google is >50% IPv6 already.


Current statistics are that a bit over 70% of websites are IPv4-only. A bit under 30% support IPv6. IPv6-only websites are a rounding error.

Therefore, if I'm on an IPv6 phone, odds are very good that my traffic winds up going over the IPv4 internet at some point.

We're 30 years into the transition. We are still decades away from it being viable for servers to run IPv6 first. You pretty much have to do IPv4 on a server. IPv6 is an afterthought.


> We are still decades away from it being viable for servers to run IPv6 first.

Just put Cloudflare in front of it. You don’t need to use IPv4 on servers AT ALL. Only on the edge. You can easily run IPv6-only internally. It’s definitely not an afterthought for any new deployments. In fact there’s even a US gov’t mandate to go IPv6-first.

It’s the eyeballs that need IPv4. It’s a complete non-issue for servers.


"Just put Cloudflare in front of it"

Why do I have to get some third party involved??

Listen, you can be assured that the geek in me wants to master IPv6 and run it on my home network and feel clever because I figured it out, but there's another side of me that wants my networking stuff to just work!


If you don’t want to put Cloudflare in front of it, you can dual-stack the edge and run your own NAT46 gateway, while still keeping the internal network v6 only.

You have a point. But you still need DNS pointing to an IPv4 address. And the fact that about 70% of websites are IPv4-only means that if you're setting up a new website, odds are good that you won't do IPv6 in the first pass.

The Cloudflare proxy automatically creates A and AAAA records. And you can't even disable the AAAA ones, except on the Enterprise plan. So if you use Cloudflare, your website is simply going to be accessible over both protocols, irrespective of the one you actually choose. Unless you're on Enterprise and go out of your way to disable it.

Pretty sure NAT was standardized before IPv6.

NAT is RFC 1631.

IPv6 is RFC 1883.

Admittedly, that was very basic NAT.


RFC 1631 is a memo, not a standard.

Actually, my bad. NAT was NEVER standardized. Not only was NAT never standardized, it's never even been on the standards track. RFC 3022 is also just "Informational".

Plus, RFC 1918 doesn't even mention NAT.

So yes, NAT is a bug in history that has no right to exist. The people who invented it clearly never stopped to think about whether they should, so here we are 30 years later.


That doesn't really mean much. Basic NAT wasn't eligible to be on the standards track as it isn't a protocol. Same reason firewall RFCs are informational or BCP.

The protocols involving NAT are what end up on the standards track, like FTP extensions for NAT (RFC 2428), STUN (RFC 3489), etc.


If only the inventors of NAT had patented it and then refused to license it!

Sort of. I think people would understand

201.20.188.24.6

And most of what they know about how it works clicks in their mind. It just has an extra octet.

I also think hardware would have been upgraded faster.


It would've been even easier and lasted longer to use two bytes of hex at the start. That would've expanded the Internet to 65536x its current space.

Something like aaff:a.b.c.d

Leaving off the prefix: could just mean strictly IPv4.


In IPv6, this is spelled ::ff00:a.b.c.d

It didn't speed up adoption, and people then tried most of the other solutions people are going to suggest for IPv4+. Want the IPv4 address as the network address instead? That's 2002:a.b.c.d/48 (6to4); many ISPs didn't deploy that either.
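
For reference, that 2002::/16 scheme (6to4, RFC 3056) embeds the IPv4 address as hex in the next 32 bits, so "2002:a.b.c.d/48" isn't literal dotted decimal. A small illustrative helper:

    // Derive the 6to4 (RFC 3056) /48 prefix for a given IPv4 address:
    // the four octets become two hex groups after the 2002: prefix.
    function sixToFourPrefix(v4: string): string {
      const [a, b, c, d] = v4.split(".").map(Number);
      const group = (hi: number, lo: number) =>
        (((hi << 8) | lo) >>> 0).toString(16).padStart(4, "0");
      return `2002:${group(a, b)}:${group(c, d)}::/48`;
    }

    console.log(sixToFourPrefix("1.2.3.4")); // "2002:0102:0304::/48"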


I think putting the extra hex at the end is better; that way it's like we're subdividing our existing networks without moving them around.

Think of it like phone numbers. For decades people have accepted gradual phone number prefix additions. I remember in rural Ireland my parents got an extra digit in the late '70s and two more in the '90s, and it was conceptually easy. It didn't change how phones work, turn your phone into a party line, introduce letters or special characters into the rotary dial, or allow you to skip consecutive similar digits.

For people who deal with IP addresses, the switch from IPv4 to IPv6 means moving from four short decimal numbers (1.2.3.4) to this:

   2001:0db8:0000:0000:0008:0800:200c:417a
   2001:db8:0:0:8:800:200c:417a
   2001:db8::8:800:200c:417a
Yes, the IPv6 examples are all the same address. This is horrible. Worse than MAC addresses, because it doesn't even follow a standard length and has fancy (read: complex) rules for shortening.

Plus, switching completely to IPv6 overnight means throwing away all your current knowledge of how to secure your home network. For lazy people, IPv4 NAT "accidentally" provides firewall-like features, because none of your home IPv4 addresses are public. People are immediately afraid of IPv6 in the home, and now they need to know about firewalls. With IPv4, firewalls were simple enough: "My network starts with 192.168, the Internet doesn't." You need to unlearn NAT and port forwarding and realise that with already-routable IPv6 addresses you just need a firewall with default deny, plus rules that "unlock" traffic on specific ports to specific addresses. Of course, more complexity gets in the way... devices use "Privacy Extensions" and change their addresses, so to make firewall rules work long-term you should key them on the device's MAC address. Christ on a bike.

I totally see why people open this bag of crazy shit and say to themselves, "Maybe next time I buy a new router I'll do this, but right now I have a home with 4 phones, 3 TVs, 2 consoles, security cameras, and some god damn kitchen appliances that want to talk to Home Connect or something." Personally, I try to avoid fucking with the network as much as possible to avoid the wrath of my wife (in her voice: "Why are you breaking shit for ideological reasons? What was broken? What new amazing thing can I do after this?").


> Yes, the IPv6 examples are all the _same address_. This is _horrible_.

Try `ping 16909060` some day :-)
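
(That works because 16909060 is just 1.2.3.4 written as a single 32-bit integer; a quick check:)

    // 16909060 === 0x01020304; read out the four bytes to get the dotted form.
    const n = 16909060;
    console.log([n >>> 24, (n >>> 16) & 0xff, (n >>> 8) & 0xff, n & 0xff].join("."));
    // -> "1.2.3.4"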


I used it to get around proxies back in the 2000s

What is confusing about that? That's like complaining that you can write an IPv4 address as 001.002.003.004 or 1.2.3.4. Even the :: isn't much different from being able to write 127.0.0.1 as 127.1 (except it now becomes explicit that you've elided the zeroes).

While it's possible to write an IPv4 address in a bunch of different ways (it's just a number, right?), nobody does, because IPv4 standard notation is easy to remember. IPv6 is not, and none of these attempts to simplify it really work, because they change the "format". I understand it and you understand it, but the point here is that it's unfriendly to anyone who isn't familiar with it.

These are all the same address too: 1.2.3.4, 16909060, 0x1020304, 0100401404, 1.131844, 1.0x20304, 1.0401404, 1.2.772, 1.2.0x304, 1.2.01404, 1.2.3.0x4, 1.2.0x3.4, 1.2.0x3.0x4, 1.0x2.772, 1.0x2.0x304, 1.0x2.01404, 1.0x2.3.4, 1.0x2.3.0x4, 1.0x2.0x3.4, 1.0x2.0x3.0x4, 0x1.131844, 0x1.0x20304, 0x1.0401404, 0x1.2.772, 0x1.2.0x304, 0x1.2.01404, 0x1.2.3.4, 0x1.2.3.0x4, 0x1.2.0x3.4, 0x1.2.0x3.0x4, 0x1.0x2.772, 0x1.0x2.0x304, 0x1.0x2.01404, 0x1.0x2.3.4, 0x1.0x2.3.0x4, 0x1.0x2.0x3.4, 0x1.0x2.0x3.0x4

v6 has optional leading zeros and ":: splits the address in two where it appears". v4 has field merging, three different number bases, and it has optional leading zeros too but they turn the field into octal!


"Why are you breaking shit for ideological reasons? What was broken? What new amazing thing can I do after this?"

LOL. Yup. What can I do after this? The answer is basically "nothing, really" or "maybe go find some other internet connection that also has IPv6 and directly connect to one of my computers inside the network" (which would have been firewalled, I'd hope, so I'd, what, have to punch a hole in the firewall so my random internet connection's IPv6 address can reach the box? How does that work? I could have just VPN'd in, back in the IPv4 world).

Seriously though, how do I "cherry-pick hole punch" random hotel internet connections? It's moot anyway, because no hotel on earth is dishing out publicly accessible IPv6 addresses to guests....


The main thing is keeping current addresses, not having both an IPv4 and an IPv6 address.

Just like for an apartment you append something like 5B. And for a house you don't need that.


It was doomed the moment you had to maintain two separate stacks, each with its own address, firewall rules and so on.

It should have been IPv4 with extra optional bits, so you could have the same rules and everything for both stacks.

I turn it off because it's a risk to have either of the two stacks misconfigured.

IPv6 should've been a superset of IPv4, in the sense that addresses are shared, rather than your server having a separate IPv4 and IPv6 address.


That's why my home network is IPv6-only. NAT64, DNS64, and 464XLAT work very well, and you only need to configure IPv4 once: in your router, where you need special configuration anyway.

What do you do about IoT devices?

Why would that be a desirable quality? Wi-Fi devices (using Matter or not) live on the same network as my PC, meaning a compromised lightbulb (or one that hasn't been updated) can be used to infiltrate and attack my home computers.

Thread + Matter, despite using a different radio, suffers from the same issue: since the border router is on the Wi-Fi network, a smart bulb using Thread can theoretically access my PC.

Yes, I'm sure there are ways to fix this, but why have the problem in the first place?

Zigbee is an entirely incompatible networking standard and doesn't have this problem.


For me, I don't even need to set up NAT64. My ISP provides it for free.

Another day, another Godwin's law of networking.

> It was doomed the moment you had to maintain two separate stacks

Pray, tell me, how are we supposed to extend IPv4 with another {insert a number here} bits without creating a new protocol (which necessitates running two stacks)?

Suppose that you have an old computer that understands only 32-bit addresses -- good ol' IPv4. Let's name it 192.168.10.10.

It then receives a packet from another computer with hypothetical "IPv4+" support, 172.12.10.98.12.4.24.31... ...Wait a minute, it can't, because your old computer understands only 32-bit addresses!

What if we really forced it to receive the packet anyway? It will see that the packet is from 172.12.10.98, because, once again, it understands 32-bit addresses only.

It then sends back the reply to... you guessed it, 172.12.10.98. Not 172.12.10.98.12.4.24.31.

Yeah, 172.12.10.98.12.4.24.31 will never get its reply back.

Do you see now why any "IPv4 with extra octets" proposal is doomed from the start?


It wouldn't be able to receive it. It's that simple. Which is not a problem: any server would still have an old IPv4 address (172.12.10.98 from your example), as they currently do and probably will for decades.

Devil's advocate: there could be an extension for IPv4 stacks. IPv4 stacks would need to be modified to include the extension in any reply to a packet received with one. There would also be a DNS modification to append the extension if it is in the record. IPv6 stacks would internally reconstruct the packet as if it were IPv6.

It would be easy to make such an extension, but you're going to hit the same problem v6 did: no v4 stacks use your extension.

How will you fix that? By gradually reinventing v6, one constraint at a time. You're trying to extend v4, so you can't avoid hitting all of the same limits v6 did when it tried to do the same thing. In the end you'll produce something that's exactly as hard to deploy as 6to4 is, but v6 already did 6to4, so you've achieved nothing.


Having just an optional field in the IPv4 header with extra address bits would add only around 100 lines of code to each stack's source. That would mean you could have one stack that handles both. Make special addresses where the additional bits are all 0, which means the field is not there at all; these addresses could reach IPv4-only addresses and could be reached from them. If you really want to make sure old devices aren't parsing IPv4+ packets, change the checksum code for all packets that contain the optional field; that way all IPv4-only devices would ignore IPv4+ packets. Alternatively, you could set the version to 5 for all packets with optional address bits.

This is stuff that could be implemented in any IPv4 stack in a few days of work.

IPv6 is overengineered; that's the reason it's still not adopted after 30 years.


You clearly do not understand networking. Or else you wouldn't make such a statement:

> This is stuff that could be implemented in any IPv4 stack in a few days of work.

The sysadmins across the world who have to deal with decades-old, never-updated devices facepalmed in unison.

At least the other comment agreed that "IPv4+" hosts will never be able to talk to IPv4 hosts.

> IPv6 is overengineered; that's the reason it's still not adopted after 30 years.

It is already adopted in many countries. Don't blame the protocol for your countrymen's incompetence.


And 2 listeners

How much energy did evolution "spend" to get us here?

I agree human brains are crazy efficient, though.


If you make it more efficient, then you train it for longer or make it larger. You're not going to just idle your GPUs.

And yes, of course it's a race; all else being equal, nobody's going to use your model if someone else has a better one.


Simulations in general are pretty flawed, and AIs will usually find ways to "cheat" the simulation.

It's a very useful tool, of course, but not as good as the situation in software.


Movies are mastered for a dark room. It's not going to look good with accurate settings if you are in a lit room.

Having said that, there are a lot of bad HDR masters.

