
Because it has a lot of potential for abuse.

BUT, notice the completely opposite treatment of AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated, but AI deepfakes and scams at scale are always downvoted, flagged, and minimized with some version of this comment:

“This has always been the case. AI doesn’t do anything new. This is a nothingburger, move on.”

You can probably see multiple versions in this thread, or in the sibling post right next to it on the HN front page: https://news.ycombinator.com/item?id=46603535

It comes up so often as to seem systematic, both the downvoting of Web3 and the upvoting of AI. Almost as if there is brigading, or even automation.

Why?

I've been saying for years that AI has far larger downsides than Web3, because in Web3 you can only lose what you voluntarily put in, but AI can cause many, many, many people to lose their jobs, their reputations, etc., and even their lives if weaponized. Web3 and blockchain can… enforce integrity?


At this point I think HN is flooded with wannabe founders who think this is "their" gold rush, and that any pushback against AI is against them personally, against their enterprise, against their code. This is exactly what happens on every vibe-coding thread and every AI-adjacent thread.


Mass participation in systems can create emergent effects larger than the sum of the parts. I opt out because first movers are unfairly advantaged, and because, lacking proper safeguards, my participation would implicitly support those participants who profit from producing misery. I don't want to accidentally launder the profits of human trafficking, nor commit my labor to building my own prison. The rhetoric promoting Web3 as an engine of progress and freedom simply oversold the capabilities of its initial design. The underlying long-term vision may still be viable.

We can't rebuild the economy without also rebuilding the State, and that requires careful, nuanced engineering and then the consent of the governed.


There are plenty of posts critical of AI on HN that reach the front page, and even more threads filled with AI criticism whether on-topic or not.

What you're noticing is a form of selection bias:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


> BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized…

It took a few years for that to happen.

Plenty of folks here were all-in on NFTs.


I never understood why the journalism industry didn't go the way of Wikipedia.

Britannica was the shining example of capitalism, sold door to door. Encarta was Microsoft's product. Both got disrupted real quick by a million people making little edits to an open encyclopedia. An open-source gift economy with many contributors seems to beat capitalistic systems: Linux, WordPress, MySQL. In general, science, Wikipedia, and open-source projects also feature peer review before publishing, a desirable trait.

Everyone has a cellphone; it's not like we need professional cameras to capture things. What we really need is a place to post clips and discuss them in a way that features peer review. It would be better, and strictly healthier, than the current for-profit giants like Meta or X. That's one of the projects I'm building using our technology. Anyone interested, email me (email in my profile).

Compare:

1. https://www.laweekly.com/restoring-healthy-communities/

2. https://www.reuters.com/investigations/meta-is-earning-fortu...


I think you're right up to a point, but "a place to post clips and discuss them" isn't enough. The world is filled with clips that are essentially meaningless, or taken out of context to say something different. In addition to aggregation and discussion, research and investigation are required to get the story behind the clip.

Because people have bills to pay.

The most dedicated Wikipedians in specific domains tend to be academics in that space, whose day jobs are adjacent to the niche they edit.

It's difficult to find the equivalent for local government, because the most knowledgeable people are already active, in the loop, and in the same circles, so social ostracism is a real risk: they might be viewed as airing dirty laundry.

The number of people in a Chamber of Commerce, PTA, City Council, School Board, Rotary Club, local Library Foundation, Church Board, Teachers Union leadership, City Workers Union leadership, Police Union leadership, and a couple family offices may number in the 50-100 range, so no one is anonymous.

And finally, most local news outlets are now owned by the third generation of the founding family, and most of them either have already gotten out of the local news business or are in the process of doing so.

The reality is, if you want to make an impact in your local community (especially politically), you will have to build local relationships and become extremely active in existing cliques: playing golf at the private golf club, attending church or temple, becoming a member of the Rotary Club, contributing to library foundation fundraisers, becoming a junior member of the Chamber of Commerce, etc.

Finally, your pitch is the exact same one NextDoor made back when it was a much smaller startup. Look at how that turned out. Making a Wikipedia-type organization in 2026 would be nigh impossible, given how decentralized the Internet has become and how it isn't a niche platform anymore.


Why was this flagged?

It speaks negatively about AI?


Read the comments. People are hating on it because it reads like AI slop, and even if you get past that there's nothing particularly insightful.

Actually, the great secret of WASM (one that will piss off a lot of people on HN, I'm sure) is that it is deterministic and can be used to build decentralized smart contracts and Byzantine-fault-tolerant distributed systems :)

Some joker who built Solana actually thought the Berkeley Packet Filter language would be better than WASM for their runtime. But besides that dude, everyone is discovering how great WASM can be for running deterministic code right in people's browsers!
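
To make the determinism point concrete, here is a minimal TypeScript sketch, assuming Node 18+ with ES modules; the file name add.wasm and its exported function are made up for illustration:

    // A WASM instance with an empty import object is a pure function of
    // its inputs (modulo NaN bit-pattern edge cases in floats), so every
    // node that runs it computes the same result, which is exactly what
    // consensus and smart contracts need.
    import { readFile } from "node:fs/promises";

    const bytes = await readFile("add.wasm"); // hypothetical compiled module
    const { instance } = await WebAssembly.instantiate(bytes, {
      // The import object is the only door for nondeterminism (clocks,
      // randomness, I/O). Keep it empty, or stub each import with a
      // deterministic stand-in, and execution is reproducible everywhere.
    });

    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3)); // 5 on every machine, every run

Anything the host wants to inject, like a block timestamp or an oracle, has to come in through those imports, which is what lets a runtime meter and pin it.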


I don't think you need WASM for that; I'm sure you can write a language that transpiles to JS and is still deterministic.

They tried their best: https://deterministic.js.org/

No, WASM is deterministic; JS is fundamentally not. Your dislike of all things blockchain makes you say silly things.


But WASM already exists and has many languages that are able to compile to it; why reinvent the wheel?

JS isn't deterministic in its performance.
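
And not just in performance; the ambient environment leaks into the semantics too. Here is a generic sketch of what any deterministic-JS subset has to stub out (nothing here is specific to that project):

    // Ambient sources of nondeterminism that a deterministic-JS
    // transpiler or sandbox must remove or replace:
    const leaks = [
      Math.random(),     // per-run entropy
      Date.now(),        // wall-clock time
      performance.now(), // scheduler-dependent timing
    ];
    // Async task ordering also depends on the host's event loop, so two
    // nodes replaying the same program can diverge unless all of these
    // are pinned to deterministic stand-ins.
    console.log(leaks);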

There's a growing list of things we're told to ignore, from Trump constantly saying he'll annex Greenland, Canada, and the Panama Canal, to new executive orders that chill free speech. "Take them seriously, not literally. On second thought, please don't even take them seriously..."

Umm..


Not convenient for me. I just do git diff | mate and copypaste.

Cherrypicking the most tedious parts, like boilerplate to get up and running, or porting code to other adapters (making SQLite and Postgres adapters, for instance).

This was done in about 3 hours for instance: https://github.com/Qbix/Platform/tree/refactor/DbQuery/platf...

You can see the speed for yourself. Here is my first speedrun livestreamed: https://www.youtube.com/watch?v=Yg6UFyIPYNY


That code has a lot of smell IMHO :/

What's the rationale behind writing PHP as if it were JS? Unless I'm mistaken, it's as if someone just did a transliteration from JS to PHP without even converting the JSDoc to PHPDoc.

And are there any tests for the code?


I actually based it on an existing PHP adapter for MySQL. Together with the AI, I went over all the features that needed refactoring, what would work the same way in Postgres and SQLite, and so on. There were a ton of nuances.

So your actual output has not increased by 20-50x, just some parts of it? What's your speedup like over an entire project, not just cherrypicked parts?

I think those parts were the big bottleneck, so technically, yes, it has.

Why not merged to main? What is the definition of done being applied here?

The code is stable and partially tested. It needs a lot more testing before being merged to main, mainly because the primary ORM adapter for MySQL has been rewritten and a lot of apps use it.

I think in 2026 the automation will reach testing, closing the loop. At that point, with no humans in the loop, software development will become extremely fast.


Wait, how did this appear 3 hours ago and also get flagged? I posted this many days ago! Something is wrong with HN timestamps.

Lots of good stuff in /newest gets missed. So, HN has an algo that selects some posts for a second chance. Looks like your post was selected for resurrection.

Yes, you can follow my code from 5 and 10 years ago here:

https://github.com/Qbix/Platform-History-v1

https://github.com/Qbix/Platform-History-v2

And you can see the latest code here:

https://github.com/Qbix

Documentation can be created a lot faster, including for normies:

https://community.qbix.com/t/membership-plans-and-discounts/...

My favorite part of AI is red-teaming and finding bugs. Just copypaste diffs and ask it for regressions. Press it over and over until it can't find any.

Here is a speedrun from a few days ago:

https://www.youtube.com/watch?v=Yg6UFyIPYNY


Yes, there is A LOT of boilerplate that is sped up by AI. Every time I interface with a new service or API, I don't have to carefully read the documentation and write the code by hand (or copypaste examples); I can literally have the AI do the first draft, grok it, test it, and iterate. Oftentimes the AI misses the latest developments, and I have to research things myself, fix the code, and explain the new capabilities before the AI can be used again. But in the end it's still about 20x faster.
