A flawless predictor would indicate you’re in a simulation, though we can’t even simulate a handful of cells at the most fine-grained level of physics.
You’re also right that even a pretty good (but not perfect) predictor doesn’t change the scenario.
What I find interesting is to change the amounts. If the open box has $0.01 instead of $1000, you’re not thinking “at least I got something”; you just one-box.
But if both boxes contain equal amounts, or you swap the amounts in each box, two-boxing is always better.
All that to say, the idea that the right strategy here is to “be the kind of person who one-boxes” isn’t a universal virtue. If the amounts change, the virtues change.
A flawless predictor would indicate you’re in a simulation [...]
No, it does not. Replace the human with a computer entering the room: the predictor analyzes the computer, and the software running on it, as it enters. If the decision program doesn’t query a hardware random source, and no stray cosmic particle flips the choice, the predictor can perfectly predict the choice just by emulating the computer accurately enough. If the program makes any use of external inputs, say the image from an attached webcam, the predictor also needs to know those inputs well enough. The same could, at least in principle, work for humans.
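To make the emulation idea concrete, here is a toy sketch (my own illustration, not anyone’s actual code): if the agent’s decision is a pure function of its inputs, a “predictor” can just run the same function ahead of time and fill the boxes accordingly, and the prediction is then correct by construction.

```python
def decision_program():
    """A deterministic 'agent' with no external inputs or randomness.

    Its choice is a pure function, so an emulator can reproduce it exactly.
    """
    return "one-box"  # this agent's fixed policy

def predictor(agent):
    """Predict the agent's choice by emulating it before it 'enters the room'."""
    return agent()

# The predictor fills the opaque box based on its emulation.
prediction = predictor(decision_program)
opaque_box = 1_000_000 if prediction == "one-box" else 0

# When the actual agent later decides, it necessarily matches the prediction.
actual = decision_program()
assert actual == prediction
```

The moment the agent consults a hardware RNG or a webcam, the predictor’s job changes from emulating a function to also knowing all of its inputs, which is the caveat above.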
I agree with you that it doesn't require that you are in a simulation, but a flawless predictor would be a strong indication that a simulation is possible, and that should raise our assumed probability that we're in a simulation.
I would think that a flawless predictor more likely indicates that memories of its predictions, and any associated records, have been modified to make it appear flawless.
That’s a great question and a very realistic thing for us to answer. There is definitely no increase in AI here. If you’d like, I can walk you through how the best posters arrive at this conclusion in the normal human way. Just say the word.
Greg Kroah-Hartman was once asked by his boss, “When will Linux be done?” He said, “When people stop making new hardware.” Even today, when we assume the hardware won’t lie, much of the work in maintaining Linux is around hardware bugs.
So even at the lowest levels of software development, you can’t know which bugs you’re going to have until you partially solve the problem and discover that this combination of hardware and drivers produces an error, and you only find that out because someone with that combination tried it. There is no way to prevent that by “making a better spec”.
But that’s always been true. Basically it’s the three-body problem. On the spectrum of simple, complicated, and complex, you can calculate the future state of a system if it’s simple, or sometimes if it’s only complicated, but you literally cannot know the future state of a complex system without simulating it, running each step and finding out.
And it gets worse. Software ranges from simple to complicated to complex. But it exists within a complex hardware environment, and also within a complex business environment where people change and interest rates change and motives change from month to month.
The idea that people are going to YOLO changes to DNS and Postgres migrations gives me such anxiety, knowing the pain people are in for when they “point Claude at it, one prompt, and done”, then their business is dead in the water for a week or two while every executive is trying to micromanage the recovery.
I love Streamlit and Mermaid, but if these are the shining examples, that isn’t a good sign. They have hard ceilings, and there’s only so far you can work around the model of “rerun the entire Python script every time the page changes”.
As long as humans are involved the UI will matter. Maybe the importance is not on the end-user facing UI, and maybe it’s more on the backend SRE-level observability UI that gives insight into whether the wheels are still on the bus, but it will matter.
Some people are getting the AI to handle that too, and like all demos, that will work until it doesn’t. I’m sure NASA or someone can engineer it well enough to work, but it’s always going to be a tradeoff: how fast you can go depends more on the magnitude of the crash you can survive than on the top speed someone achieves once and lives to tell about it.
> it allows our margins to be higher and our speed of implementation to be faster
Faster than what? You will be faster than your previous self, just like all of your competitors. Where’s the net gain here? Even if you somehow managed to capture more value for yourself, you’ve stopped providing value to 5-10x that many employees who are no longer employed.
When costs approach zero on a large scale, margins do not increase. Low costs = you’re not paying anyone = your competitors aren’t paying anyone = your customers no longer have money = your revenue follows your costs straight to zero.
Companies that provide physical services can’t scale without hiring. A one-man “crew” isn’t putting a roof on a data center.
I want to be wrong. Tell me why you think any of this is wrong.
I don't think you are wrong. I find that many tech people/founders excited by AI don't understand endgame economics in general. Like kids excited by a new toy starting their new startup, they don't see the endgame if this all plays out; or they're hopeful that they'll be the lucky ones.
Generally, once industries become a cheap commodity, they get at best cost-based pricing. If you aren't charging at cost, I will go to whoever is, especially in a saturated market.
Ironically, large corporations, rather than tech companies, are probably where the SWE jobs of the future are: cost-based pricing in cost centres, creating their own software with domain knowledge rather than generic SaaS. Shared platforms will probably still have some value, but the value there isn't from the effort in code; it's more from things like network effects, physical control, regulation, etc. Not an industry to get into anymore, IMO -> AI is destroying SWE.
Software was always a means to an end; albeit an expensive way to get there that often paid off anyway at scale. The means is getting cheaper; the end remains.
In my editor this looks like this, with an extension like Tailwind Fold or Inline Fold:
<div class="...">
  <p class="...">
    Because Tailwind is so low-level, it never encourages you to design the same site twice. Some of your favorite sites are built with Tailwind, and you probably had no idea.
  </p>
</div>
FWIW, “colocation in component-based architecture” doesn’t necessarily mean shared code. It can just mean the one thing has all of its parts in one place, instead of having HTML in one file, CSS in another, JS in another.
You’re right about DRY and code reuse very often being a premature (wrong) abstraction, which is usually more of a problem than a few copy/pastes, because premature wrong abstractions become entrenched and harder to displace.
The cognitive load of looking at 12 open files trying to understand what’s happening. Well, in fairness, some of those 12 are the same file, because we have one part for the default CSS and one for the media query 900 lines further down the file.
If your complaint is that your styles are so complicated they sit in a giant 900-line mega file, I don’t see how you address the physical size other than by breaking up the file.
Granted, nesting support was also added fairly recently in the grand scheme of things, which boggles the mind given that it was such an obvious problem, with such an obvious solution, that CSS preprocessors came about to address it.
> Can you provide examples in the wild of LLMs creating bad descriptions of code? Has it ever happened to you?
Yes. The docs it produces are generally very generic, like they could be the docs for anything, with project specifics sprinkled in, and pieces that are definitely incorrect about how the code works.
> for some stuff we have to trust LLMs to be correct 99% of the time