rat9988's comments | Hacker News

What is the difference between having the military put boots on the ground to kill you if you don't comply, and having the military at the border next to you to kill you, bomb you, or kidnap you if you don't comply?

One involves an invasion having already occurred and the other doesn't. Making this distinction is literally the subject of the entire thread.

The USA is not meaningfully in control of Venezuela. They have no more control than they did a week ago.

Venezuela's regime did not change either. The same generals and politicians remain in power, although there is bound to be some power struggle among them.


> Well, the first 90% is easy, the hard part is the second 90%.

You'd need to prove that this assertion applies here. I understand that you can't deduce the rate of future gains from past ones, but you also can't state this as a universal truth.


No, I don't need to. Self-driving cars are the most recent and biggest example, LLMs aside. The saying I quoted (which comes in different forms) holds for programming, construction, and even cooking. So it's a simple, well-understood baseline.

Knowledge engineering has a notion called "covered/invisible knowledge", which points to the small things we do unknowingly but that change the whole outcome. None of the models (even AI in general) can capture this. We can say it's the essence of being human, or the tribal knowledge that makes experienced workers who they are, or makes mom's rice taste that good.

Considering these are highly individualized and unique behaviors, a model based on averaging everything can't capture this essence easily, if it ever can, without extensive fine-tuning for/with that particular person.


>> No, I don't need to. Self driving cars is the most recent and biggest example sans LLMs.

Self-driving cars don't use LLMs, so I don't know how any rational analysis can claim that the analogy is valid.

>> The saying I have quoted (which has different forms) is valid for programming, construction and even cooking. So it's a simple, well understood baseline.

Sure, but the question is not "how long does it take for LLMs to get to 100%". The question is, how long does it take for them to become as good as, or better than, humans. And that threshold happens way before 100%.


>> Self-driving cars don't use LLMs, so I don't know how any rational analysis can claim that the analogy is valid.

Doesn't matter, because if we're talking about AI models, no (type of) model reaches 100% linearly, or reaches 100% ever. For example, recognition models run on probabilities. Like Tesla's Autopilot (TM), which loves to hit rolled-over vehicles because it has not seen enough vehicle underbodies to classify them.

Same for scientific classification models. They emit probabilities, not certain results.
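
(A minimal sketch of what "probabilities, not certain results" means in practice: a classifier's raw scores are typically turned into a probability distribution, e.g. with softmax, so the output is always a confidence, never a certainty. The class names and numbers below are made up for illustration.)

    import numpy as np

    def softmax(logits):
        # subtract the max for numerical stability before exponentiating
        e = np.exp(logits - np.max(logits))
        return e / e.sum()

    # hypothetical raw scores for three classes
    logits = np.array([2.1, 0.3, -1.0])
    probs = softmax(logits)  # roughly [0.83, 0.14, 0.04]
    print(dict(zip(["car", "truck", "road debris"], probs.round(2))))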

>> Sure, but the question is not "how long does it take for LLMs to get to 100%"

I never claimed that a model needs to reach a proverbial 100%.

>> The question is, how long does it take for them to become as good as, or better than, humans.

They can be better than humans at certain tasks. They have actually been better than humans at some tasks since the '70s, but we like to disregard that to romanticize current improvements. Still, I don't believe the current, or any, generation of AIs can be better than humans at anything and everything, all at once.

Remember: No machine can construct something more complex than itself.

>> And that threshold happens way before 100%.

Yes, and I consider that "threshold" to be "complete", if they can ever reach it, for certain tasks, not "any" task.


Self-driving cars are not a proof. They only prove that quick gains don't necessarily mean you'll get to 100% fast. They don't prove that it will necessarily happen.

"covered/invisible knowledge" aka tacit knowledge

Yeah, I failed to remember the term while writing the comment. Thanks!

>None of the models (even AI in general) can capture this

None of the current models maybe, but not AI in general? There’s nothing magical about brains. In fact, they’re pretty shit in many ways.


A model trained on a very large corpus can't, because these behaviors are different or specialized enough that they cancel each other out in most cases. You can forcefully fine-tune a model with a single person's behavior up to a certain point, but I'm not sure even that can capture the subtlest behaviors or decision mechanisms, which are generally the most important ones (the ones we call gut feeling or instinct).
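
(As a rough illustration of that kind of "forceful" fine-tuning, here is a minimal sketch of adapting a small pretrained language model to one person's text with Hugging Face transformers. The model name and the user_texts corpus are placeholders, and real personalization would need far more data and care.)

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    # hypothetical corpus: messages written by one particular person
    user_texts = ["an example message written by that one person", "another one"]

    model.train()
    for text in user_texts:
        batch = tokenizer(text, return_tensors="pt")
        # standard causal-LM objective: predict the next token of the person's own text
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()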

OTOH, while I won't call the human brain perfect, the things we label "shit" generally turn out to be very clever and useful optimizations to work around its own limitations, so I regard the human brain more highly than most AI proponents do. Also, we shouldn't forget that we don't know much about how that thing works. We only guess and try to model it.

Lastly, searching for perfection in numbers and charts, or in an engineering sense, is misunderstanding nature and doing it a great disservice, but that's a subject for another day.


Our understanding of the brain is far from complete, whether brains are "magical" or "shit."

Also obviously brains are both!

I read the comment more as "based on past experience, it is usually the case that the first 90% is easier than the last 10%", which is the right base-case expectation, I think. That doesn't mean it will definitely play out that way, but you don't have to "prove" things like this. You can just say that they tend to be true, so it's reasonable to expect this one will probably be true again.

The saying is more or less treated as a truism at this point. OP isn't claiming something original and the onus of proving it isn't on them imo.

I've heard this same thing repeated dozens of times, and for different domains/industries.

It's really just a variation of the 80/20 rule.


They probably don't need it. You can start a crowdfunding campaign if you do.


I'm pretty sure the armies of accountants would have rated it higher if the cash flow were positive rather than negative. Negative can't be good, even after accounting for taxes.


>Yes, it’s there, but very much in the fineprint.

This is where it belongs, at best. He doesn't even have to disclose it. Prompting so that the AI writes the code faster than you would is okay.


Yes, now generalize the theorem to any human to make it usable on a daily basis.


The error rate would still be improved overall, which might make it a viable tool for the price, depending on the use case.


Well, it is the original poster who used vibe-coding to mean "AI-assisted engineering".


> I didn't see much concrete evidence this was noticeably better than 5.1

Did you test it?


No, I would like to, but I don't see it in my paid ChatGPT plan or in the API yet. I based my comment solely on what I read in the linked announcement.


Well, the case would still stand, wouldn't it? Unless C is free of these dozen common issues.

