ecocentrik's comments

Doesn't this run into the same bottleneck as developing AI-first languages? AIs need tons of training material showing how to write good formal-verification code, or code in new AI-first languages, and that material doesn't exist. The only solution is large-scale synthetic generation, which is hard to do if humans, on some level, can't verify that the synthetic data is any good.
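For the formal-verification case specifically, the usual synthetic-generation proposal filters model-sampled candidates through an automated checker. A minimal sketch, assuming the candidate proof files were already produced by a model and that a lean-style proof-checker CLI is available (both assumptions, not anything from the thread):

    import subprocess

    def checker_accepts(proof_path: str) -> bool:
        # Run a formal proof checker on one candidate file. The "lean"
        # binary is an assumption, standing in for whatever verifier the
        # pipeline actually uses; exit code 0 means the proof checks.
        result = subprocess.run(["lean", proof_path], capture_output=True)
        return result.returncode == 0

    def build_training_set(candidate_paths: list[str]) -> list[str]:
        # Keep only machine-verified candidates. Note the catch the
        # comment points at: the checker, not a human, decides what
        # counts as "good" data, so the specifications it checks against
        # still need human trust.
        return [p for p in candidate_paths if checker_accepts(p)]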


In the US they also get scanned and stored.


My advice: There's always at least one crypto scammer telling you to hold through the dip.


I hear there’s always money in the banana stand.


Given the choice between a 2000-acre banana plantation and 400 bitcoin, I would choose the banana plantation, fully confident that I would get a better return from bananas over the next 20 years.


What can it cost, $5?


My advice... Take a time machine back to 2009-2012 & only invest 100%.

Otherwise it's too late.


I agree. Agentic use isn't always necessary. Most of the time it makes more sense to treat an LLM like a dumb, unauthenticated human user.
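Concretely, that means validating model output the same way you'd validate a form submission from an anonymous visitor. A minimal sketch in Python (the JSON action schema and the allowlist are hypothetical, just to illustrate the posture):

    import json

    ALLOWED_ACTIONS = {"search", "summarize", "translate"}  # explicit allowlist

    def handle_llm_output(raw: str) -> dict:
        # Treat the model like an unauthenticated user: parse strictly,
        # never eval()/exec() its text, and reject anything off-list.
        try:
            payload = json.loads(raw)
        except json.JSONDecodeError:
            raise ValueError("malformed model output")
        if payload.get("action") not in ALLOWED_ACTIONS:
            raise PermissionError("requested action not permitted")
        return payload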


Mississippi? I bet it's a flyover state with a tiny sliver of road that sees massive trucking volume.


It's gonna be California (but I'm guessing, not sure). Other states just defer to federal regulation.

That they don't put the state on blast sort of suggests the big cost isn't entirely real: either they think they can induce regulatory change, or the number of tests needed to sell the systems is far less than the number that would be needed to let 100% of the market use them.


Mississippi doesn't make people do certifications lol. Unless you drive a hybrid; then you pay the hybrid tax.


Eh. Discovering how neurons can be coaxed into memorizing things with almost perfect recall was cool, but real AGI, or even ASI, shouldn't require the sum total of all human-generated data to train.


There was a period when Wikipedia was scrutinized more heavily than print encyclopedias, because people did not understand the power of having 1000s of experts, and the occasional non-expert, editing an entry for free instead of underpaying one pseudo-expert. They couldn't comprehend how an open-source encyclopedia would even work, or trust that humans could effectively collaborate on the task. They imagined that 1000s of self-interested chaos monkeys would spend all their energy destroying what 2-3 hard-working people had spent hours creating, instead of the inverse. Humans are very pessimistic about other humans. In my experience, when humans are given the choice to cooperate or fight, most choose to cooperate.

All of that said, I trust Wikipedia more than I trust any LLM, but I don't rely on either as a final source for understanding complex topics.


> the power of having 1000s of experts, and the occasional non-expert, editing an entry

When Wikipedia was founded, it was much easier to change articles without notice. There may not have been 1000s of experts then, as there are today. There are also other things Wikipedia does today to ensure articles are accurate that it may not have done, or been able to do, decades ago.

I am not making a judgment of Wikipedia; I use it quite a bit. I am just stating that it wasn't trusted when it first came out, specifically because it could be changed by anyone. No one understood it then, but today I think people understand that it's probably as trustworthy as, or more trustworthy than, a traditional encyclopedia is/was.


> In my experience when humans are given the choice to cooperate or fight, most choose to cooperate.

Personally, my opinion of human nature falls somewhere in the middle of those two extremes.

I think when humans are given the choice to cooperate or fight, most choose to order a pizza.

A content creator I used to follow was fond of saying "Chill out, America isn't headed towards another civil war. We're way too fat and lazy for that."


Even ordering a pizza requires the cooperation of a functioning telecom system, a pizza manufacturer, a delivery person, a hungry customer...


Sure, but I hope you get my point. Fighting takes effort; cooperation takes effort. Most people have other things to worry about and don't care about whatever it is you're fighting or cooperating over. People aren't motivated enough to try to sabotage the Wikipedia articles of others, even if they could automate it. There's just nothing in it for them.


The opposite of love and hatred is apathy.


For better or worse, it's also what makes for reliable systems.


> "They imagined that 1000s of self-interested chaos monkeys would spend all of their energy destroying what 2-3 hard working people has spent hours creating instead of the inverse."

Isn't that exactly what happens on any controversial Wikipedia page?


There aren't that many controversial topics at any given time. One of Wikipedia's solutions was to lock pages until a controversy subsided. Perma-controversy has been managed in other ways: avoiding statements of opinion as fact, using clear and uncontroversial language, using discussion pages to hash out acceptable and unacceptable content, competent moderators... Rage burns itself out, and people get bored with vandalism.


It doesn't always work. Some topics are perpetual edit wars because both (or multiple) sides see the proliferation of their perspective as a matter of life and death. In many cases one side is correct in this assessment and the others are delusional, but it's not always easy to align the side that's correct with the people who effectively control the page, because editors do have their own biases (ideological, philosophical, partisan, national, or otherwise). For those topics, Wikipedia can never be a source of "truth".


What's the obsession with Burry?


When 99.999% of investors lose money and one person hits it big, people are naturally drawn to that person. Even more so when a really fun movie is made in which they play a big role.



Christian Bale played him in the film adaptation of Michael Lewis's book, The Big Short.


By "real people" do you mean people who are not members of those minority groups? Or are people who can "accurately classify the facial expression of images from minority groups" not "real people"?

I hope you can see the problem with your very lazy argument.


AIs are not real people, obviously. Just look at the first line to see the intended line of argument.

It's not about which people per se, but how many, in aggregate.


LLMs are close enough to pass the Turing Test. That was a huge milestone. They are capable of abstract reasoning and can perform many tasks very well, but they aren't AGI. They can't teach themselves to play chess at the level of a dedicated chess engine, or fly an airplane, using the same model they use to copy-paste a React UI. They can only fool non-proficient humans into believing they might be capable of those things.


The Turing Test was a thought experiment, not a real benchmark for intelligence. If you read the paper the idea originated from, it is largely philosophical.

As for abstract reasoning, if you look at ARC-2 they are barely capable, though at least some progress has been made on the ARC-1 benchmark.


I wasn't claiming the Turing Test is a benchmark for intelligence, but the ability to fool a human into thinking a machine is intelligent in conversation is still a significant milestone. I should have said "some abstract reasoning". ARC-2 looks promising.


>I wasn't claiming the Turing Test is a benchmark for intelligence, but the ability to fool a human into thinking a machine is intelligent in conversation is still a significant milestone.

The Turing Test is whether it can fool a human into thinking it is talking to another human, not to an intelligent machine. And ironically this is becoming less true over time, as people get better at spotting the tendencies LLMs have in writing, such as their frequent use of dashes or "it's not just X, it's Y" constructions.

