Hacker News | kmeisthax's comments

"I'm kind of old school in that I believe if you put grass on the ground without a fence, people should be allowed to do whatever they want with it. The noblemen with a thousand cows seem to agree."

And that, my friends, is how you kill the commons - by ignoring the social context surrounding its maintenance and insisting upon the most punitive ways of avoiding abuse.


Context is important, but isn't HN's social context, in particular, that the site is entirely public, easily crawled through its API (which apparently has next to no rate limits) and/or Algolia, and has been archived and mirrored in numerous places for years already?

Signal and information are not grass.

Grass and property require upkeep. Radio waves and electromagnetic radiation do not.

I don't want your dog to piss on my lawn and kill my grass. But what harm does it cause me if you take a picture of my lawn? Or if I take a picture of your dog?

If I spend $100M making a Hollywood movie - paying employees, vendors, and taxes, contributing to the economic growth of the country - and then that product gets stolen and given away completely for free without my being able to see any upside, that's a little bit different.

But my Hacker News comment? It's not money.

I think there are plausible ways to draw lines that protect genuine work, effort, and economics while allowing society and innovation to benefit from the commons.


Any Canadians in the room should remember this as the exact mechanism by which Nortel Networks became astronomically huge. Any time Nortel got more valuable, index funds tracking the Toronto Stock Exchange (TSE) loaded up on Nortel, amplifying the price increase. This gave the company massive amounts of capital to buy other companies with, which generated more headlines, which brought in more investor capital, which brought more index funds in. In fact, at one point Nortel was so valuable it made the TSE too homogenous to legally index, at least until Nortel lobbied Canada to change the rules regarding diversified index funds.

If you aren't Canadian (like me) you can watch this Bobbybroccoli video that explains it very well: https://www.youtube.com/watch?v=I6xwMIUPHss

Spoiler alert for the Bobbybroccoli video, but it turns out this trick doesn't work forever. And when Nortel inevitably crashed, it left a good chunk of Canadians as bagholders. And looking at the stock market over the past few years, where basically all the growth is seven companies, I'm starting to wonder if we're finally seeing America's answer to the Nortel fiasco.

(No, Lucent doesn't count, even though they're literally America's counterpart to Nortel. The key factor that made Nortel a problem was the lack of diversity in the Canadian market. Lucent crashed and burned in a field of hundreds of growing big-cap stocks, Nortel was an extremely big fish in a tiny pond.)


The issue isn't the game assets. The division between assets and code is something id invented so they could open up their old game engines while still being able to sell[0] copies of the original DOOM. But that boundary is entirely a choice of the developer, and a consequence of separability of copyright - if you make something, you can break up the rights and hand them out in different ways. Legally speaking, the only thing that matters is if any part of OpenTTD is the same as any part of Transport Tycoon.

This part gets a little confusing in software, because we have a proud history of both cultural norms and actual caselaw allowing unauthorized reimplementation of other people's copyright-bearing APIs. Applying copyright to software basically created a mutant form of patent law that lasts forever, so the courts had to spend decades paring it back by defining boundaries between the two. Reimplementation precedent is part of that boundary.

But all of that precedent relies upon software compatibility - the argument being that if you lawfully use someone else's software library to write software, you are not surrendering ownership over your own program to your library vendor, and someone else with a compatible replacement is not infringing the original library.

Legal arguments relying on reimplementation work well when the APIs in question are minimally creative and there is a large amount of third-party software that used them. The closest example would be something like Ruffle, which reimplements a Flash Player runtime that was used by countless games. OpenTTD exists to reimplement precisely one game, specifically to enable a bunch of unauthorized derivative works that would be facially illegal if they had been applied directly to the TTD source code. This wouldn't fly in court.

In court, OpenTTD would be judged based on substantial similarity between its code and Transport Tycoon's code. While copyright does not apply to game rules, and cloning a game is legal[1], I am not aware of any effort in OpenTTD to ensure their implementation of those rules is creatively distinct from Transport Tycoon's. In fact, OpenTTD was forked from a disassembly of the latter, which is highly likely[2] to produce substantial similarity.

tl;dr I'm genuinely surprised Atari didn't sue them off Steam!

[0] Translation for pedants: "have a monopoly on selling". In the creative biz, two people generally don't make money selling the same thing.

[1] Trade dress and trademark lawsuits notwithstanding - The Tetris Company has done an awful lot of litigation on that front.

[2] The standard way to avoid this is clean-room reverse engineering. It's not a legal requirement, of course, but it helps a lot.


It might be challenging to show "substantial similarity" between an assembly codebase and a C++ codebase after 20 years of evolution.

Unfortunately thanks to cases like <https://en.wikipedia.org/wiki/Tetris_Holding,_LLC_v._Xio_Int...> there doesn't have to be any direct code overlap for a game to be in violation.

This is extremely light on details, but I'm pretty sure "Right to Compute" has absolutely nothing to do with software freedom and everything to do with making it harder to oppose giant datacenter buildouts for AI companies, so they can blast you with infrasound, spike the price of electricity and RAM, and build surveillance systems to take away your rights.

Well they do define compelling government interest to include

> "Compelling government interest " means a government interest of the highest order in protecting the public that cannot be achieved through less restrictive means. This includes but is not limited to: (a) ensuring that a critical infrastructure facility controlled by an artificial intelligence system develops a risk management policy; (b) addressing conduct that deceives or defrauds the public; (c) protecting individuals, especially minors, from harm by a person who distributes deepfakes and other harmful synthetic content with actual knowledge of the nature of that material; and (d) taking actions that prevent or abate common law nuisances created by physical datacenter infrastructure.

Point (d) seems to address that, potentially.


My thoughts exactly. It reads a lot like they are trying to minimize the state's power to regulate AI. I'm not sure that's such a good thing. Regulation is one of the only ways we can manage the "bads" that come with any new technology. In the US, we've never been very good at regulating new technologies before industry stakeholders entrench themselves in the lobbying circuit.

They're proactively shielding themselves from the eventual, justified realization that spiking the price of a population's water and electricity such that they cannot afford them IS an externality just as bad as polluting the water supply.

Yeah, I have relatives in Ashburn VA with over 200 data centers running and it's practically uninhabitable /s

This is also why Rust has separate PartialEq and Eq traits - the latter is only available for types that don't have weird not-self-equal values like floating point NaNs or SQL NULLs. If you lie to Rust and create a wrapper type over f32 or f64 that has Eq, then you'd get unindexable NaN keys that just sit in your hashmap forever.

The real surprise to me is that Python can index NaN keys sometimes, at least by reference to the original NaN. I knew CPython does some Weird Shit with primitive values, so I assume it's because the hashmap is comparing by reference first and then by value.
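A quick sketch of that behavior in CPython (the key detail, as far as I can tell, is that dict lookups short-circuit on object identity before falling back to `==`, so the same NaN object is findable while an equal-looking one is not):

```python
import math

nan = float("nan")
d = {nan: "found me"}

# Lookup with the *same* NaN object succeeds: CPython compares keys
# by identity first, skipping the NaN != NaN equality comparison.
assert d[nan] == "found me"

# A *different* NaN object fails: equality is false (and on 3.10+
# the two objects don't even hash the same), so we get a KeyError.
try:
    d[float("nan")]
    reachable = True
except KeyError:
    reachable = False
assert not reachable

# But iteration doesn't need to *find* the key by hashing (same idea
# as draining a Rust HashMap), so the entry is still recoverable.
assert [math.isnan(k) for k in d] == [True]
```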


> unindexable NaN keys that just sit in your hashmap forever.

Although you shouldn't put nonsense in a HashMap, HashMap::drain will give you (an iterator to get) back everything you put into the HashMap, even if it's nonsense; HashMap::retain will let you provide a predicate to throw away the nonsense if you want to keep everything else; and HashMap::extract_if even allows you to get the nonsense back out again, to put it somewhere more sensible.

These work because they don't need to find your nonsense in particular using hashing, which may now be impossible, they're just looking at everything.


Is this Trump's "discombobulator"?

Yes.

In ARM, TrustZone[0] is a higher level of privilege than hypervisors (EL3 vs. EL2); it's morally equivalent to x86 System Management Mode. That means it categorically can steal your data. There's nothing EL2 code can do to prevent inspection or manipulation from a malicious EL3.

A less awful design would have been to keep the security code at EL2 and have CPU hardware that can isolate two EL2s from one another[1]. This is ultimately what ARM wound up doing with S-EL2, but you still need to have EL3 code to define the boundary between the two. At best the SoC vendor can design a (readable/auditable!) boot ROM that occupies EL3 and enforces a boundary between secure and non-secure EL2s.

[0] Or, at least, TrustZone's secure monitor. TZ can of course run secure code at lower privilege levels, but that doesn't stop a TZ compromise from becoming a full system compromise.

[1] If you're wondering, this is morally equivalent to Apple's guarded exception levels.


I mean, considering that no quantum computer has ever actually factored a number, a speedup on tiny numbers is still impressive :P

I didn't get the quantum hype last year. At least with AI, you can see it do some impressive things with caveats, and there are bull and bear cases that are both reasonable. The quantum hype train is promising the world, but compared to AI, it's at the linear regression stage.

It's a variation of the nerd snipe. https://xkcd.com/356/

People get taken by the theoretical coolness and ultimate utility of the idea, and assume it's just a matter of clever ideas and engineering to make it a reality. At some point, it becomes mandatory to work on it because the win would be so big it would make them famous and win all sorts of prizes and adulation.

QC is far earlier than "linear regression" because linear regression worked right away when it was invented (and reinvented multiple times, I think). Instead, with QC we have: an amazing theory based on our current understanding of physics, the ability to build lab machines that exploit the theory, and some immediate applications, were a powerful enough quantum computer ever built. On the other side, making one that beats a real computer at anything other than toy challenges is a huge engineering challenge, and every time somebody comes up with a QC that does something interesting, it spurs the classical computing folks to improve their results, which can be immediately applied on any number of off-the-shelf systems.


> People get taken by the theoretical coolness and ultimate utility of the idea, and assume it's just a matter of clever ideas and engineering to make it a reality. At some point, it becomes mandatory to work on it because the win would be so big it would make them famous and win all sorts of prizes and adulation.

Good description. Commercial fusion power seems to be in the same category currently.

The next step once you have enough thinkers working on the problem is to start pretending that commercial success is merely a few years away, with 5 or 10 years being the ideal number.


Quantum computing is cool, but a lot of the people who were hyping it last year were absolute charlatans. They were promising things that quantum computers couldn't even do theoretically, let alone next year. Even the more down-to-earth claims were things we are still 10-40 years away from, presented as if they're going to happen next month.

Quantum computers are still cool and worthy of research. It's going to be a very long road, though. Where we are with quantum computers is roughly equivalent to where we were with regular computers in the 1800s.

The hype people just make everything suck and should be ignored.


The only things I'm aware of that I consider actual problems it solves are "it breaks classical encryption" and "you may be able to use it to directly model other quantum systems like for protein folding and such".

Everything else I consider pretty silly. "It can improve logistics" - I'm fairly sure computers are already as good as they can be; what dominates logistics calculations isn't an inability to optimize but the fact that the real world can only conform so closely to any model you build. "It can improve finance" - same deal, really. All the other examples I see cited are problems where we've probably already got running code that is at the noise floor imposed by reality and its stubborn unwillingness to completely conform to plans.

If I had $1 to invest between AI and quantum computing I'd end up rounding the fraction of a cent that should rationally go to quantum computing and put the whole dollar in AI.

By far the most exciting possibility is one that Scott Aaronson has cited, which is: what if quantum computers fail somehow? To put it in simple and unsophisticated terms, what if we could prove that you can't entangle more than 1024 qubits and do a certain amount of calculation with them? What if the universe actually refuses to factor a thousand-digit number? The way in which it fails would inevitably be incredibly interesting.


It does not even break classical encryption (though classical encryption needs higher security margins if attacks using quantum computers are possible).

It breaks only classical public-key encryption.

Public-key encryption is not necessary within a closed organization, e.g. for the personal use of an individual or group of individuals, or within a spy agency or for military applications, though it can slightly simplify the process of key distribution, which otherwise needs an initial physical pairing between devices.

The most important application of public-key encryption is for allowing relations between parties who have never met in person, by the use of digital signatures and of Diffie-Hellman key establishment protocols.

This has been essential to enable online shopping and online banking, but not for the more traditional uses of cryptography.


The problem is that it's an exponential slowdown on large numbers.

Hey hey, 15 = 3*5 is factoring.

My understanding is that they factored 15 using a modular exponentiation circuit that presumes prior knowledge of the factor 3. Factoring 15 when you already know 3 is not so impressive. Shor's algorithm has never been run with a full modular exponentiation circuit.

The very first demonstration of factoring 15 with a quantum computer, back in 2001, used a valid modular exponentiation circuit [1].

The trickiest part of the circuit is they compile conditional multiplication by 4 (mod 15) into two controlled swaps. That's a very elegant way to do the multiplication, but most modular multiplication circuits are much more complex. 15 is a huge outlier on the difficulty of actually doing the modular exponentiation. Which is why so far 15 is the only number that's been factored by a quantum computer while meeting the bar of "yes you have to actually do the modular exponentiation required by Shor's algorithm".

[1]: https://arxiv.org/pdf/quant-ph/0112176#page=15
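For intuition about what the quantum part actually buys you, here's a classical sketch of Shor's algorithm for n = 15, with the order-finding step (the only quantum part) replaced by brute force:

```python
from math import gcd

def shor_classical(n, a):
    """Classical emulation of Shor's algorithm. The quantum computer's
    only job is to find the multiplicative order r of a mod n; here we
    brute-force it instead."""
    assert gcd(a, n) == 1
    # Find the smallest r > 0 with a^r = 1 (mod n). This is what the
    # modular exponentiation circuit plus the quantum Fourier
    # transform compute with a speedup.
    r = 1
    while pow(a, r, n) != 1:
        r += 1
    if r % 2 == 1:
        return None  # unlucky base, retry with another a
    x = pow(a, r // 2, n)
    if x == n - 1:
        return None  # trivial square root of 1, retry
    # x^2 = 1 (mod n) with x not ±1, so the factors split across gcds.
    return gcd(x - 1, n), gcd(x + 1, n)

print(shor_classical(15, 7))  # (3, 5)
```

For a = 7 the order is 4 (7, 4, 13, 1), so x = 7^2 mod 15 = 4 and the two gcds recover 3 and 5.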


Would other Mersenne numbers admit the same trick? If so, factoring 2047 would be really interesting to see. It's still well within the toy range, but it's big enough that it would be a lot easier to believe that the quantum computer was doing something. (15 is so small that picking any odd number greater than 1 and less than sqrt(15) is guaranteed to be a correct factor.)

No, 15 is unique in that all multiplications by a known constant coprime to 15 correspond to bit rotations and/or bit flips. For 2047 that only occurs for a teeny tiny fraction of the selectable multipliers.
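You can brute-force this claim classically: every base coprime to 15 acts on the 4-bit register as a rotation, optionally composed with a bitwise complement, which is why the circuit compiles down to swaps and NOTs. A quick check:

```python
from math import gcd

def rol4(x, k):
    """Rotate a 4-bit value left by k places."""
    k %= 4
    return ((x << k) | (x >> (4 - k))) & 0xF

# Multiplication by 2, 4, 8 mod 15 is a rotation by 1, 2, 3 bits;
# their negatives 13, 11, 7 (and 14 = -1) add a bitwise complement.
units = [a for a in range(1, 15) if gcd(a, 15) == 1]
for a in units:
    found = any(
        all((a * x) % 15 == post(rol4(x, k)) for x in range(1, 15))
        for k in range(4)
        for post in (lambda v: v, lambda v: ~v & 0xF)
    )
    assert found, a
```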

Shor's algorithm specifies that you should pick the base (which determines the multipliers) at random. Somehow picking a rare base that is cheap to implement really does start overlapping with knowing the factors as part of making the circuit. By far the biggest cheat you can do is to "somehow" pick a number g such that g^2 = 1 (mod n) but g isn't 1 or n-1. That's exactly the number that Shor's algorithm is looking for, and the whole thing collapses into triviality.
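For n = 15 that cheat base is g = 4: it squares to 1 mod 15 without being 1 or 14, so the factors fall out of two gcds with no quantum work at all:

```python
from math import gcd

n, g = 15, 4
assert pow(g, 2, n) == 1 and g not in (1, n - 1)
# g^2 - 1 = (g - 1)(g + 1) = 0 (mod n), so n's prime factors split
# across the two terms and gcd picks them out.
p, q = gcd(g - 1, n), gcd(g + 1, n)
assert (p, q) == (3, 5)
```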


You can also get a dog to factor 15, see pages 9-11 of this paper:

https://news.ycombinator.com/item?id=44608622 - Replication of Quantum Factorisation Records with a VIC-20, an Abacus, and a Dog (2025-07-18, 25 comments)


Furthermore, in most copyright lawsuits that nerds like us actually care about (i.e. ones involving service providers and not actual artists or publishers), the number of works infringed is so high that the judge can just work backwards from the desired damage award and never actually hit the statutory damages cap. If the statutory damages limit was actually reached in basically any intermediary liability case, we'd be talking about damage awards higher than the US GDP.

Linear arithmetic is one hell of a drug.
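Back-of-the-envelope: the $150,000 figure is the real US statutory maximum for willful infringement (17 U.S.C. § 504(c)); the work count below is a purely hypothetical number for a large intermediary case.

```python
STATUTORY_MAX_PER_WORK = 150_000   # willful infringement cap, USD
works_infringed = 200_000_000      # hypothetical, for illustration
US_GDP_APPROX = 27_000_000_000_000  # roughly 2023 US GDP, USD

theoretical_cap = STATUTORY_MAX_PER_WORK * works_infringed
assert theoretical_cap > US_GDP_APPROX  # $30T worth of "damages"
```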


That makes running a seedbox sound like a threat of global economic mass destruction.

Or said differently: the law is stupid

The Overton window has not shifted, at least not among rank-and-file tech workers. There was very loud and vocal internal opposition to building and selling weapons[0]. They all lost the argument in the boardrooms because the US government writes very big checks. But I am told they are very much still around.

CEOs are bound to sociopathically amoral behavior - not by the law, but by the Pareto-optimal behavior of the job market for executives. The law obligates you to act in the interests of the shareholders, but it does not mandate[1] that Line Go Up. That is a function of a specific brand of shareholder that fires their CEOs every 18 months until the line goes up.

In 2007, Big Tech had plenty of the consumer market to conquer, so they could afford to pretend to be opposed to selling to the military. But the game they were playing was always going to end with them selling to the military. Once they were entrenched they could ignore the no-longer-useful-to-us-right-now dissenters, change their politics on a dime, and go after the "real money".

[0] Several of the sibling comments are mentioning hypothetical scenarios involving dual-use technologies or obfuscated purposes. Those are also relevant, but not the whole story.

[1] There are plenty of arguments a CEO could use to defend against a shareholder lawsuit that they did not take a particularly short-sighted action. Notably, that most line-go-up actions tend to be bad long-term decisions. You're allowed to sell low-risk investments.


Complaining loudly about working with the government to build weapons and then continuing to build them isn't the same as people refusing to work for companies that handle weapons contracts. The window has indeed shifted, with tech workers now merely virtue signaling on social media.
