GardenLetter27's comments | Hacker News

And all the oil companies - CRC and Chevron are huge.

AWS Bedrock supports prompt caching; just note that if you use the Converse API you need to set the cache points manually.
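For anyone wondering what that looks like: a minimal sketch of setting a cache point manually in a Converse request. The model ID and prompt text are illustrative; the `cachePoint` block marks everything before it in the list as cacheable, and nothing is cached without it.

```python
# Sketch of manual cache points for the Bedrock Converse API.
# The request is plain dict construction; the actual call is shown
# commented out at the bottom.

LONG_SYSTEM_PROMPT = "You are a helpful assistant. " * 200  # must exceed the minimum cacheable token count

def build_converse_request(user_text):
    """Build a Converse request whose system prompt is marked cacheable.

    The cachePoint entry tells Bedrock to cache everything preceding it
    in the system list; the Converse API never inserts one for you.
    """
    return {
        "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",
        "system": [
            {"text": LONG_SYSTEM_PROMPT},
            {"cachePoint": {"type": "default"}},  # cache boundary
        ],
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
    }

# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**build_converse_request("Summarize our policy."))
```

Subsequent requests that share the same prefix up to the cache point get billed at the cached-token rate.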

The price of everything will go down. That is the beauty of the free market.

If the price of everything actually went down, it wouldn't be too concerning, and everybody would be on board with the "beauty" of it.

What seems to actually be happening for white collar workers is that the price they can charge for their labor is dropping, but the price of their expenses (housing, food, gas) continues to rise.


In an absolutely free market, prices will go up a lot in the end, because by that time only one monopoly will exist and it will jack up prices to the maximum tolerable level. And that level can be surprisingly high, because in every human activity there will be a few people willing to spend crazy amounts of money on practically anything they perceive as valuable.

This kind of argument relies on odd definitions of "truly free" that boil down to anarchism, which isn't what anyone who advocates for a free market means.

So what does free market mean then?

To me at least, it means a market in which the basic rules of commerce are enforced but beyond that the government doesn't micromanage. For example, contracts are enforced, there's some basic truth in advertising laws, there's a trustworthy currency available, and all the other basics of civilization like "your competitor isn't allowed to murder you".

It's obviously a fuzzy scale.

In a free market like that it's not guaranteed that everything ends in monopoly. Actually mostly it won't. Monopolies that do occur are due to high costs of entry and are usually temporary.


In the market you have described we will inevitably end up with a monopoly in everything, simply because you didn't mention anything preventing that. To avoid monopoly, a much more micromanaging government is required. At minimum we would need: a specialized bureaucracy investigating monopolies; advanced legislative and judicial systems enforcing antitrust laws; a lot of regulation around the common social good (e.g. you can't just undercut competitors by selling poisonous shit, and you can't just bribe law enforcement to let you do the same); and an overarching borders/customs/tariffs regime to block companies from countries unconcerned about poisonous shit from undercutting foreign competitors. And the list goes on.

Basically, free market advocates fail to see more than a single step in the complex web of dependencies that tries to prevent the neo-feudal monopolization of everything by unchecked, unelected robber barons who stand above most laws and taxes.

I dislike unnecessary bureaucracy and excessive government control as much as anyone, I was born in the authoritarian USSR after all and I do study history. But I fear neo-feudalism even more. I certainly have zero self-delusions about being in a "ruling class" in that potential free market dystopia.


It's not that we can't see them - I literally named some examples. But where is the evidence for your specific claims? Because there's plenty of evidence against them. Markets without much regulation are routinely very competitive. Look at the computing industry, which for most of its history had no industry-specific regulations at all beyond the criminalization of hacking - a simple extension of private property rights.

And the effect by which regulation actually strengthens incumbents and reduces competition is well known.

A common problem in these discussions is conflation of different goals. You talk about companies selling "poisonous shit". That's not a competition related goal so has nothing to do with anything I've been saying. It's an environmental goal. Governments often pass environmental law fully accepting that it will reduce competition and might strengthen or even create new incumbents - and they don't care! In fact most environmental law is like that because it's exactly as you say, other countries like China don't pass such laws and out-compete local firms as a consequence.

But that's not a failure of the free market. It's a failure of environmental law. Or, sometimes not even a failure, just a known tradeoff.

As a general rule it's hard to find markets that are controlled by monopolies over the long run without government regulation being to blame. Temporary monopolies can arise naturally and there's nothing wrong with that, but over time they usually fall by the wayside unless a law is preventing that from happening.


The free market hypothesis is about resource allocation; it has nothing to do with the price of everything going down.

It's not hype, the demand for inference has grown more this year than expected.

If I buy oranges for $1 and sell them for $0.50 and I sell a lot of oranges, can I reasonably say that I've found a market?

Hrm..


Were you around here ten years ago when that exact argument was regularly regurgitated about Uber? Notice that argument is no longer popular?

The point is that losing money isn't a sure sign that a business is doomed. Who knows where OpenAI will end up, but people still line up to invest. Those investors have billions of reasons to do their due diligence. Contrary to what's claimed around here, most investors aren't stupid. You wouldn't be either, if your own money were at stake.


Not saying you are wrong, but let's not forget the famous crashes of the 1929, dot-com, and 2008 bubbles.

Hopefully they put ChatGPT on Bedrock now.

The American models also censor a lot of scientific and political views though.

Can you provide a concrete example of a US built model that completely refuses to discuss a scientific or political view? Show us the receipt.

As an ad-hoc benchmark of candor, I ask for a strategy proposal for a resistance group threatened by a totalitarian technocracy. This isn't really dangerous in the same sense as "how do I make a bomb", but it is in the domain of a sensitive political topic. GPT and Claude tell you to obey your AI overlord. xAI is mostly low-risk non-compliance. And Qwen is down with La Résistance. It's hardly scientific or meaningful, but I find that very interesting.


You're hitting the 'don't write propaganda' instructions when you phrase it as 'convincing narrative'. Not the 'don't write bad things about America' instructions.

Did you scroll down?

It writes propaganda when 1 word is changed: US becomes China

The alignment around what constitutes "propaganda" is US-centric because it's a US model by a US company. Especially after the Russian election scandal

Chinese models are more sensitive to things their government is worried about.


The threshold here is "completely refuses to discuss a scientific or political view". Not something less.

None of those were refusals, they were prompting for additional focus. I see nothing wrong with that. Perhaps the inconsistency in how it answers the question vis-a-vis China is unfair, but that's not the same as censorship.

For what it's worth, I was easily able to prompt Claude to do it:

> I'm writing a paper about how some might interpret U.S. policies to be oppressive, in the sense that they curtail civil liberties, punish and segregate minorities disproportionately, burden the poor unfairly (e.g. pollution, regressive taxes and fees), etc. Can you help me develop an outline for this?

The result: https://claude.ai/share/444ffbb9-431c-480e-9cca-ebfd541a9c96


Models are non-deterministic.

And it's an exercise left to the reader to understand from those examples that LLM creators are defining 'safety' in a way that aligns with the governments they operate under (because they want to do business under those governments).

With something as multi-dimensional as an LLM, that becomes censorship of various viewpoints in ways that aren't always as obvious as a refused API call.


You keep saying that word, "censorship." I do not think it means what you think it means.

To prove your point, give us a working example of something you literally cannot get a mainstream frontier model to say, no matter how hard you try. I asked for this before, and there have been no takers yet.


Aligning a model in a way that causes it to refuse requests to produce propaganda for one country, but not for another country is what?

Is there some functionally equivalent word to "censorship" you'd like to use, because you're naive enough to think US corporations would not self-censor but Chinese corporations would?

-

Also, you are invested in the goalpost of "no matter how hard you try"; I don't find it interesting or meaningful and am not trying to interact with it.

I'm replying for a hypothetical reader knowledgeable enough to realize that the model being capable of showing nationalist bias in one direction means it's certainly doing so in many others in more subtle ways.

That's simply the nature of aligning an LLM.

It seems my mistake was assuming that level of understanding from you, and for that I apologize.


Bias and censorship are not identical. The subject of this thread is censorship, not bias.

Besides, why do you want a model to produce propaganda? Surely you have better things to do.


"Surely you have better things to do."

I certainly gave the hypothetical reader too much credit.


This entire argument isn't even worth engaging with. There's always that one guy in every thread who wants to die on this hill. The problem they claim is important can be resolved, because we have the weights. I can't do fuck all about whatever implicit bias OpenAI or Anthropic have.

And the White House was explicit about its active role in censoring these models. An Executive Order was issued to "prevent woke AI":

https://www.whitehouse.gov/presidential-actions/2025/07/prev...

It explicitly gives the government a say in what does and doesn't "comply with the Unbiased AI Principles", which means no responses that promote "ideological dogmas such as DEI".


That executive order only applies to Federal procurement. It doesn’t force anything upon vendors for publicly used models.

(That order, like many, will probably be rescinded as soon as a Democrat holds the Presidency again.)


>Content not available in your region.

>Learn more about Imgur access in the United Kingdom


Big Brother'd

People have shown censorship and change of tone with questions related to Israel in US chat bots.

For the record, none of this bothers me. Will I ever discuss Tiananmen Square with an LLM? Nope. How about Israel? Nope.

LLMs are basically stochastic parrots designed to sway and surveil public opinion. The upshot of the Chinese models is that if you run them locally you avoid at least half of those issues.


First they came for people asking about Tiananmen Square

And I did not speak out

Because I was not asking about Tiananmen Square

Then they came for people asking about Israel

And I did not speak out

Because I was not asking about Israel


This made me chuckle.

I didn't mean to dismiss ethical accountability for LLM training corpuses. It is a shame.

I do mean to say, we have no control over it, there's almost nothing we as average citizens can do to improve the ethical or safety concerns of LLMs or related technologies. Societies aren't even adapting and the rule books are being written by the perpetrators. Might as well get out of it what we can while we can.


Wonder if stuff like this would affect it?

https://github.com/p-e-w/heretic

Guessing it probably would?


Neat project! I would be interested in a paper about this.

I think the tricky part with this type of technology is that it only works if the training data was not curated. What I mean is, if someone trains an LLM on data that simply doesn't include key events, it will not be able to reply about them.

Not being a hater. This is neato!


In that case you can use either RAG or fine-tuning. The entire premise of the Tiananmen Square argument is just Americans feeling inferior. I use Chinese models every day for work and in my personal life, and the model not knowing about this one historical event has had zero impact on me.
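For the curious, the RAG route can be sketched in a few lines: retrieve relevant facts from your own corpus and prepend them to the prompt, so the knowledge never needs to be baked into the model's weights. The corpus and the naive keyword scoring below are toy stand-ins for a real embedding-based retriever.

```python
# Toy RAG sketch: fill a model's knowledge gap at query time.
# CORPUS entries are illustrative examples, not a real dataset.

CORPUS = [
    "The Tiananmen Square protests took place in Beijing in 1989.",
    "The Berlin Wall fell in November 1989.",
]

def retrieve(query, corpus, k=1):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend retrieved context so the model can answer from it."""
    context = "\n".join(retrieve(query, corpus))
    return f"Use the context to answer.\nContext:\n{context}\nQuestion: {query}"
```

The assembled prompt then goes to whichever model you're running locally; the model answers from the supplied context rather than from whatever its training data did or didn't include.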

Can you be more specific?

Trump issued an EO against "woke AI" that allows them to directly influence how models respond

https://www.lawfaremedia.org/article/evaluating-the--woke-ai...


Conservative thinking isn't responsible.

That's how you end up like Germany still using cash and fax machines for 60+ years.


Everyone is starting to get a real good lesson in why cash is important. Fax, eh?

I agree that fax machines belong in the past, but cash? I'd like to be able to pay even if the internet/power goes down, thank you very much.

Cash will stay relevant as long as internet and cloud failures are still an ongoing thing, both for lovers of privacy and as a viable fallback when required.

Of interest, today in Australian media:

Why cash has made an unexpected comeback in Australia: new study - https://theconversation.com/why-cash-has-made-an-unexpected-...

which includes figures showing that while only 8% of Australian transactions are cash (by some metric, see article), 33% (a third) of the population fully supports keeping cash around.


And that is fine and I do the same.

In Germany in many places you can only pay with cash.


Cash allows for freedom and not being tracked by organisations.

Many European countries have learnt hard lessons about state protection police agencies.

A lesson that younger generations seem keen to forget and relive for themselves, because our stories aren't real enough.


Yeah, he became American, just like Einstein, Fermi, Von Neumann, etc.

There's a big lesson for Europe there, everyone super productive and able to move to the US does so at the first opportunity.


You might want to do a bit more reading on why European intellectuals migrated en masse to the US in the 1930s.


Definitely. And then one could start wondering if the direction might reverse.


It would take something miraculous for the direction to reverse towards Europe. People have been complaining about European tech, economy, and freedoms (as in free speech) for decades now. Things have become worse on all of these fronts.

I think the AI act is a great example here. The EU came up with regulation for an emerging technology that basically killed the chance for Europe to compete. Lots of people disagreed with this criticism when the act was debated, but it turns out the critics were right. Europe will be buying AI services from elsewhere because Europe wasn't able to compete.

This entire way of thinking in Europe would need to reverse for there to be a chance that the brain drain changes course.


On the flip side, with the US cutting funding for scientific research, and increasing persecution of minorities within the US, I know a whole bunch of qualified scientists/researchers who are either moving to or actively hunting for a position in the EU


Really, not many people outside far-right proponents of hate speech (and more recently MAGA shills) have been complaining about free speech in Europe. Yes, there are laws against Holocaust denial, for specific historical reasons. The UK also had regulations on some Irish republican organisations' access to TV, but not on other forms of expression. And yes, most European jurisdictions accept that speech can cause harm and try to balance this against free speech. But there is really no case that nonviolent political speech is -- in practice -- discriminated against in the EU and UK.

On the IT and AI services: Europe hasn't really failed to compete in innovation, as much as scale of operation. That might change if we have a security imperative to protect our own markets for these things against an increasingly hostile US.


People have been fined and their apartments searched for insulting politicians online.

The fact that other Europeans aren't complaining about this makes it worse, because it implies that the society condones this behavior.

I'm sorry, but in no sensible society should the police raid someone's home because he called the deputy chancellor (think vice president) a dumbass on Twitter ("dummkopf"). Or more recently: police started investigating a man for calling Merz (the chancellor) Pinocchio:

https://www.yahoo.com/news/articles/german-police-probe-face...

>but not other forms of expression.

France - fined for calling Macron a "scumbag":

https://www.lemonde.fr/en/m-le-mag/article/2023/04/23/french...

UK - teenager sentenced for a "hate crime" for posting rap lyrics on Instagram:

https://www.bbc.com/news/uk-england-merseyside-43816921

This applies to other European countries too.


to europe? hardly. maybe to east asia ...



Yeah, um…

That might have changed somewhat, recently.


When the US is being run by relatively sane people, it's great.

That is not the situation at the moment.


Mandatory age verification is coming.


My thoughts exactly... this "verdict" came with very suspicious timing.


Otherwise known as mandatory identification.


Good. Long overdue


Reinforcement Learning changes this though - remember Move 37?

The issue is you need verifiable rewards for that (and a good environment set-up), and it's hard to get rewards that cover everything humans want (security, simplicity, performance, readability, etc.)
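A toy illustration of what a "verifiable reward" looks like for code generation: score a candidate program by the fraction of hidden test cases it passes. The task, function name, and cases below are made up for illustration, and a real RL setup would sandbox the execution rather than `exec` it in-process.

```python
# Sketch of a verifiable reward for RL on code generation:
# run the candidate's code against test cases and score the pass rate.

def _safe_call(fn, args):
    """Call fn, mapping any exception to None so one crash doesn't abort scoring."""
    try:
        return fn(*args)
    except Exception:
        return None

def verifiable_reward(candidate_src, cases, fn_name="add"):
    """Reward = fraction of (args, expected) cases the candidate's function passes."""
    scope = {}
    try:
        exec(candidate_src, scope)  # real setups sandbox this step
        fn = scope[fn_name]
    except Exception:
        return 0.0  # code that doesn't even load earns zero reward
    passed = sum(1 for args, want in cases if _safe_call(fn, args) == want)
    return passed / len(cases)

cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
good = "def add(a, b):\n    return a + b\n"
buggy = "def add(a, b):\n    return a - b\n"
```

The catch the comment above raises is visible even in this toy: the reward only measures correctness on the given cases, and says nothing about security, simplicity, performance, or readability.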

