uncomputation's comments | Hacker News

> what are billions of people doing on Facebook if it's harmful? I don't know.

This is extremely naive and akin to asking "why do people drink if it’s bad for you then??" Popular != healthy


Tbh, it's not clear that if we all stopped drinking the world would be better.


Yeah, and maybe it would be better still if we all tried heroin!


Are you legitimately trying to compare alcohol to heroin?


I don't want to spoil anything for you, but ethanol is actually a very reactive molecule, and in some ways it acts similarly to opioids like heroin. Among other things, it stimulates endogenous opioid pathways, leading to the release of β-endorphins and activation of mu-opioid receptors. So alcohol works indirectly and heroin directly, but both enhance opioid signalling. If you're curious, this study explains it really well:

https://pmc.ncbi.nlm.nih.gov/articles/PMC3728478/


Food can also indirectly enhance opioid signaling. What's your point?


Food activates it within normal biological limits. Alcohol and heroin artificially push the same system far beyond its normal range, forcing the brain to compensate by downregulating receptors or reducing endogenous opioid production. So it's totally legit to compare alcohol to heroin.


I think they are legitimately mocking the notion that we wouldn't all be better off without alcohol.


Oh it definitely would


Hard disagree.


I think it's incredibly naive and arrogant to tell billions of people who use a product of their own free will, "ackchyually its really bad and you should stop".

Almost everyone would give you a response similar to mine. They use it because it's an easy way to plan events since so many people are on it, or because a small business can easily create a website and sell something, or just to kill some time on the can.

Give it a rest.


Would you say the same about cigarettes? Billions of people used to smoke as well; it was (and still is) quite popular. Is it arrogant and patronising to tell them: this is unhealthy for you, and it affects society in harmful ways?


It's the television and the Internet. It's that simple.


There's more generalizable recent work on this, for those expecting more: https://github.com/leochlon/hallbayes


A lot of pearl clutching over extremely average marketing material.


“Bans ByteDance” might be better wording.


Wow, this might be one of the worst PR decisions in recent history.


Certainly it is the equivalent of a kid yelling "I can play my music as loud as I want to!"

As I understand the proposed legislation, it would apply to many websites and not be directed at TikTok or China specifically. I wonder if there is a larger strategic interest for China if the US enacts this type of law? Maybe the blowback is entirely expected and is the actual desired response?


I was mistaken about the currently proposed legislation[0], which mentions ByteDance specifically and is broad enough to include any app from a "foreign adversary country", defined elsewhere.

It shall be unlawful for an entity to distribute, . . . a foreign adversary controlled application by . . .: [app store] or [internet hosting]

[...]

FOREIGN ADVERSARY COUNTRY.—The term “foreign adversary country” means a country specified in section 4872(d)(2) of title 10, United States Code.

USC Title 10 section 4872(d)(2) defines the adversaries as N. Korea, China, Russia and Iran. [1]

0. https://www.congress.gov/bill/118th-congress/house-bill/7521...

1. https://www.law.cornell.edu/uscode/text/10/4872


Except it's a lie. The modal did show up for some people, but it was closable via an 'X'.


> Except it's a lie. The modal did show up for some people, but it was closable via an 'X'.

Or was it a dark pattern?

The calling and hanging up aspect seems to indicate a lot of people didn't understand how to close it without calling.


> they don't refute that they did betray it

They do. They say:

> Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”

Whether you agree with this is a different matter but they do state that they did not betray their mission in their eyes.


The benefit is the science, nothing else matters, and having OpenAI decide what matters for everyone is repugnant.

Of course they can give us nothing, but in that case they should start paying taxes and stop claiming they're a public benefit org.

My prediction is they'll produce little of value going forward. They're too distracted by their wet dreams about all the cash they're going to make to focus on the job at hand.


I agree with your sentiment, but the prediction is very silly. Basically every time OpenAI releases something, they beat the state of the art in that area by a large margin.


We have a saying:

There is always someone smarter than you.

There is always someone stronger than you.

There is always someone richer than you.

There is always someone X than Y.

This applies to anything: just because OpenAI has a lead now doesn't mean they will stay X for long rather than Y.


> The benefit is the science, nothing else matters, and having OpenAI decide what matters for everyone is repugnant.

OpenAI gets to decide what it does with its intellectual property for the same reason that a whole bunch of people are suing it for using their intellectual property.

It only becomes repugnant to me if they're forcing their morals onto me, which they aren't, because (1) there are other roughly-equal-performance LLMs that aren't from OpenAI, and (2) the stuff it refuses to do is a combination of stuff I don't want to exist and stuff I have a surfeit of anyway.

A side effect of (1) is that humanity will get the lowest common (moral and legal) denominator in content from GenAI from different providers, just like the prior experience of us all getting the lowest common (moral and legal) denominator in all types of media content due to internet access connecting us to other people all over the world.


> The benefit is the science, nothing else matters

Even if that science helps not so friendly countries like Russia?


OpenAI at this point must be literally the #1 target for every big spy agency in the whole world.

As we saw previously, it doesn't matter much if you are a top-notch AI researcher; if 1-2 million of your potential personal wealth is at stake, it affects decision making (and it probably would affect mine too).

How much of a bribe would it take for anybody inside with good enough access to switch sides and take all the golden eggs out? 100 million? A billion? Trivial amounts compared to what we're discussing. And they would race each other into your open arms for such amounts.

We've seen it recently, e.g. government officials in Europe betraying their own countries to Russian spies for a few hundred to a few thousand euros. A lot of people are in some way selfish by nature, or can be manipulated easily via their emotions. Secret services across the board are experts in that; it just works(tm).

To sum it up: I don't think it can be protected long term.


I'm a very weird person with money. I've basically got enough already, even though there are people on this forum who earn more per year than I have in total. My average expenditure is less than €1k/month.

This means I have no idea how to even think about people who could be bribed when they already earn a million a year.

But also, if AI can be developed as far as the dreamers currently making it real hope it can be developed, money becomes as useless to all of us as previous markers of wealth like "a private granary" or "a lawn" or "aluminium cutlery"[0].

[0] https://history.stackexchange.com/questions/51115/did-napole...


Wouldn't you accept a bribe if it's proposed as "an offer you can't refuse"?


Governments WILL use this. There isn't any real way to keep their hands off technology like this. Same with big corporations.

It's the regular people that will be left out.


> Even if that science helps not so friendly countries like Russia?

Nothing will stop this wave, and the United States will not allow itself to be on the sidelines.


They are totally closed now: not just keeping their models to themselves for profit purposes, they also don't disclose how their new models work at all.

They really need to change their name and another entity that actually works for open AI should be set up.


Their name is as brilliant as

“The Democratic People's Republic of Korea”

(AKA North Korea)


> everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...

everyone... except scientists and the scientific community.


Well, the Manhattan Project springs to mind. They truly thought they were laboring for the public good, and even if the government had let them, they wouldn't have wanted to publish their progress.

Personally I find the comparison of this whole saga (DeepMind -> Google -> OpenAI -> Anthropic -> Mistral -> ?) to the Manhattan Project very enlightening, both about this project and about our society. Instead of a centralized government project, we have a loosely organized mad dash of global multinationals for research talent, all of which claim the exact same "they'll do it first!" motivations as always. And of course it's accompanied by all sorts of media rhetoric and posturing through memes, 60 Minutes interviews, and (apparently) gossipy slap-back blog posts.

In this scenario, Oppenheimer is clearly Hinton, who's deep into his act III. That would mean the real Manhattan Project of AI took place roughly 2018-2022 rather than now, which I think also makes sense; ChatGPT was the surprise breakthrough (the A-bomb), and now they're just polishing that into the more effective, fully realized forms of the technology (the H-bomb, ICBMs).


> They truly thought they were laboring for the public good

Nah. They knew they were working for their side against the other guys, and were honest about that.


The comparison is dumb. It wasn’t called the “open atomic bomb project”


Exactly. And OpenAI actually called it the "open atomic bomb project".


They literally created weapons of mass destruction.

Do you think they thought they were good guys because you watched a Hollywood movie?


Hmm, do you have some sources? That sounds interesting. Obviously there's always doubt, but yeah, I was under the impression that everyone at the Manhattan Project truly believed the Axis powers were objectively evil, so any action was justified. Obviously that sort of thinking falls apart on deeper analysis, but it's very common during full-scale war, no?

EDIT: tried to take the onus off you, but as usual history is more complicated than I expected. Clearly I know nothing because I had no idea of the scope:

  At its peak, it employed over 125,000 direct staff members, and probably a larger number of additional people were involved through the subcontracted labor that fed raw resources into the project. Because of the high rate of labor turnover on the project, some 500,000 Americans worked on some aspect of the sprawling Manhattan Project, almost 1% of the entire US civilian labor force during World War II.
Sooo unless you choose an arbitrary group of scientists, it seems hard. I haven't seen Oppenheimer, but I understand it carries on the narrative that he "focused on the science" until the end of the war, when his conscience took over. I'll look into that more…


If you really think you're fighting evil in a war for global domination, it's easy to justify to yourself that it's important you get the weapons before they do. Even if you don't think you're fighting evil, you'd still want to develop the weapons before your enemies, so they won't be used against you and threaten your way of life.

I'm not taking a stance here, but it's easy to see why many Americans believed developing the atomic bomb was a net positive at least for Americans, and depending on how you interpret it even the world.


The war against Germany was over before the bomb was finished. And it was clear long before then that Germany was not building a bomb.

The scientists who continued after that (not all did) must have had some other motivation at that point.


I kind of understand that motivation: it is a once-in-a-lifetime project, you are part of it, you want to finish it.

Morals are hard in real life, and sometimes really fuzzy.


On this note: HIGHLY recommend "Rigor of Angels", which (in part) details Heisenberg's life and his moral qualms about building a bomb. He just wanted to be left alone to perfect his science, and it's really interesting to see how such a laudable motivation can be turned to such deplorable, unforgivable (IMO) ends.

Long story short, he claimed he thought the bomb was impossible, but it was still a large matter of concern for him as he worked on nuclear power. The most interesting tidbit was that Heisenberg was in a small way responsible for (West) Germany's ongoing ban on nuclear weapons, which is a slight redemption arc.


Heisenberg makes you think, doesn't he? As the developer of Hitler's bomb, which was never a realistic thing to begin with, he never employed slave labour, for example. Nor was any of his work used during warfare. And still, he is seen at best by some as a tragic figure, at worst as the man behind Hitler's bomb.

Wernher von Braun, on the other hand, got lauded for his contribution to space exploration. His development of the V2, and his use of slave labour in building them, was somehow just a minor transgression in service of the greater good, ultimately under US leadership.


To be reductionist - history is written by the victors.

https://www.smbc-comics.com/comic/status-2


Charitably I think most would see it as an appropriate if unexpected metaphor.


I think they thought it would be far better that America developed the bomb than Nazi Germany, and that the Allies needed to do whatever it took to stop Hitler, even if that meant using nuclear bombs.

Japan and the Soviet Union were more complicated issues for some of the scientists. But that's what happens with warfare. You develop new weapons, and they aren't just used for one enemy.


What did Lehrer (?) sing about von Braun? "I make rockets go up, where they come down is not my department".


Don't say that he's hypocritical,

Say rather that he's apolitical.

"Once the rockets are up, who cares where they come down?

That's not my department," says Wernher von Braun.


That's the one, thank you!


So.. "open" means "open at first, then not so much or not at all as we get closer to achieving AGI"?

As they become more successful, they (obviously) have a lot of motivation to not be "open" at all, and that's without even considering the so-called ethical arguments.

More generally, putting "open" in any name frequently ends up as a cheap marketing gimmick. If you end up going nowhere it doesn't matter, and if you're wildly successful (ahem) then it also won't matter whether or not you're de facto 'open' because success.

Maybe someone should start a betting pool on when (not if) they'll change their name.


OpenAI is literally not a word in the dictionary.

It’s a made up word.

So the Open in OpenAI means whatever OpenAI wants it to mean.

It’s a trademarked word.

The fact that Elon is suing them over their name, when the guy ships a feature called "Autopilot" (not a made-up word, and one with an actual, well-understood meaning that totally does not apply to how Tesla uses it), is hilarious.


Actually, the Open[Technology] pattern implies a meaning in this context. OpenGL, OpenCV, OpenCL, etc. are all 'open' implementations of a core technology, maintained by non-profit organizations. So the OpenAI non-profit immediately implies a non-profit for researching, building, and sharing 'open' AI technologies. Their earlier communication and releases supported that idea.

Apparently, their internal definition was different from the very beginning (2016). The only problem with their (Ilya's) definition of 'open' is that it is not very open. "Everyone should benefit from the fruits of AI." How is this different from the mission of any other commercial AI lab? If OpenAI keeps the science closed and only its products open, then 'open' is just a term they use to define their target market.

A better definition of OpenAI's 'open' is that they are not a secret research lab. They act as a secret research lab, but out in the open.


> An autopilot is a system used to control the path of an aircraft, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems). https://en.wikipedia.org/wiki/Autopilot

Other than the vehicle, this would seem to apply to Tesla's Autopilot as well. The "Full Self-Driving" claim is the absurd one; odd that you didn't choose that example.


OpenAI by Microsoft?


Ilya may have said this to Elon but the public messaging of OpenAI certainly did not paint that picture.

I happen to think that open sourcing frontier models is a bad idea but OpenAI put themselves in the position where people thought they stood for one thing and then did something quite different. Even if you think such a move is ultimately justified, people are not usually going to trust organizations that are willing to strategically mislead.


What they said there isn't their mission; that is their hidden agenda. Here is the real mission they launched with, which they completely betrayed:

> As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world

https://openai.com/blog/introducing-openai


“Don't be evil” ring any bells?


Google is a for-profit, they never took donations with the goal of helping humanity.


They started as a defence contractor with a generous "donation" from DARPA. That's why I never trusted them from day 0. And they have followed a pretty predictable trajectory.


"Don't be evil" was codified into the S-1 document Google submitted to the SEC as part of their IPO:

https://www.sec.gov/Archives/edgar/data/1288776/000119312504...

""" DON’T BE EVIL

Don’t be evil. We believe strongly that in the long term, we will be better served—as shareholders and in all other ways—by a company that does good things for the world even if we forgo some short term gains. This is an important aspect of our culture and is broadly shared within the company.

Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see. """


Yes, there they explain why doing evil would hurt their profits. But a for-profit's main mission is always money; the mission statement just explains how they make money. That is very different from a non-profit, whose whole existence has to be described in such a statement, since it isn't about profits.


Nothing in an S-1 is "codified" for an organization. Something in the corporate bylaws is a different story.


This claim is nonsense, as any visit to the Wayback Machine can attest.

In 2016, OpenAI's website said this right up front:

> We're hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We'll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

I don't know how this quote can possibly be squared with a claim that they "did not imply open-sourcing AGI".


In that case, they mean that their mission to ensure everyone benefits from AI has changed to one where only a few benefit. But it would support them saying something like "it was never about open data."

In a way, this could be more closed than a for-profit.


> but it's totally OK to not share the science...

That passes for an explanation to you? What exactly is the difference between OpenAI and any company with a product, then? Hey, we made THIS, and in order to make sure everyone can benefit, we sell it at a price of X.


The serfs benefitted from the use of the landlord's tools.

This would mean it is fundamentally just a business with extra steps. At the very least, the "foundation" should be paying tax then.


So, open as in "we'll sell to anyone", except that at first they didn't want to sell to the military, and they still don't sell to people deemed "terrorists." Riiiiiight. Pure bullshit.

Open could mean the science, the code/IP (which includes the science), or pure marketing drivel. Sadly, it seems to be the last.


“The Open in openAI means that [insert generic mission statement that applies to every business on the planet].”


Can the title be updated to include the “Meet”? Otherwise it’s a bit ominous…


The reporting on this study is conflating sentiment with plot structure, which misrepresents the study.

See, for example, Frankenstein. The sentiment rises slightly during the Creature's narration to Victor of his circumstances, likely the part about the French family he was "living" with/stowing away with, but that's certainly not a "rise" in the sense that Oedipus rises to noble status. It's hard to interpret Frankenstein as anything other than the protagonist's consistent and tragic downfall ("riches to rags" in this analysis).

Not sure if that's fundamentally a problem with trying to extrapolate plot beats from sentiment alone, or a bit of less-than-accurate journalism.
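
To make the objection concrete, here's a minimal sketch (a toy lexicon and toy text, not the study's actual pipeline) of how such arcs are typically extracted: per-sentence sentiment scores smoothed with a sliding window. A local patch of positive words, like the Creature's time with the French family, produces a "rise" in the curve even though the plot never stops descending.

  # Toy sentiment-arc sketch (hypothetical; not the study's code).
  import re

  POSITIVE = {"happy", "joy", "love", "kind", "gentle", "hope"}
  NEGATIVE = {"misery", "wretched", "horror", "death", "despair", "alone"}

  def sentence_sentiment(sentence):
      words = re.findall(r"[a-z']+", sentence.lower())
      return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

  def story_arc(text, window=2):
      sentences = re.split(r"(?<=[.!?])\s+", text.strip())
      scores = [sentence_sentiment(s) for s in sentences]
      # The smoothed curve is the "arc" that gets classified into shapes.
      return [sum(scores[i:i + window]) / window
              for i in range(len(scores) - window + 1)]

  text = ("Misery and despair followed me. "
          "The gentle family filled me with hope and love. "
          "Their kindness brought me joy. "
          "But horror and death awaited; I was wretched and alone.")
  print(story_arc(text))  # [0.5, 2.0, -1.5]: rises mid-story, despite the overall downfall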


TLDR:

Word standardizes text through:

- Document templates

- English as lingua franca

- Auto correct and completion

Whether or not you agree is up to you (I personally do not find it convincing), but here's a summary, because the article is very long-winded.

