Hacker News | Rapzid's comments

At normal viewing distances (let's say cinema FOV), most people won't see a difference between 4K and 8K, never mind 16K.

And it's not that they "don't notice"; it's that they physically can't distinguish finer angular separation.
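A back-of-envelope way to see this: compare each resolution's pixels per degree against a rough acuity limit. The specific numbers below are my assumptions, not established figures: roughly 60 pixels per degree (about 1 arcminute per pixel) for 20/20 vision, and a cinema-like horizontal FOV of about 50 degrees.

```python
# Back-of-envelope: pixels per degree vs. an approximate acuity limit.
# ACUITY_PPD and FOV_DEGREES are assumed round numbers for illustration.
ACUITY_PPD = 60      # ~1 arcminute per pixel (20/20 vision, roughly)
FOV_DEGREES = 50     # cinema-like horizontal field of view

for name, width in [("4K", 3840), ("8K", 7680), ("16K", 15360)]:
    ppd = width / FOV_DEGREES
    verdict = "beyond" if ppd > ACUITY_PPD else "within"
    print(f"{name}: {ppd:.0f} px/deg ({verdict} the acuity limit)")
```

Under these assumptions even 4K already exceeds the ~60 px/deg limit, so the extra angular density of 8K and 16K goes unseen.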


There is no evidence TMK that model accuracy changes due to release cycles or capacity issues. Only latency. Both Anthropic and OpenAI have stated they don't do any inference-compute shenanigans due to load, or post-release model optimization.

Tons of conspiracy theories and accusations.

I've never seen any compelling studies (or even raw data) to back any of it up.


I love how the investigators got taken for a ride too. I heard them on NPR talking about how Altman was genuinely grappling with his "desire to please everyone," etc., right after having described him as someone who tells people what he thinks they want to hear.

Incredible.


This isn't all that "new" or crazy. How about Expo and React Native?

What do Expo / React Native do?

Actively push you to use their build (and configuration!) service, and actively create/maintain friction for building and publishing production apps without it.

You don't even have to go to other frameworks. Open laravel.com in the Wayback Machine: 10 years ago there were two commercial offerings in the main navigation, with at least two others under a dropdown.

High-speed NVMe is soaring too. Some popular Samsung kits are up 3x compared to 12 months ago.


It's not possible for two-camp believers to conceive of two-camp dwellers. That would be tantamount to a third, potentially superior camp.

The two-camp construct is a tool to establish the believer as a member of the supreme one-camp group, apart from the lesser campers. Their entire identity and self-worth is built around one-camp membership.


I find there are two types of people.

People who think developers fall into one of two camps.

And people worth listening to.


All the extra notice in the world wouldn't make me want to trade our tech jobs market and salaries for that of Europe's.


Those are big numbers, especially for non-enterprise DBs in the '90s.

MySQL's big breakthrough (not specifically talking about perf) was InnoDB becoming the default storage engine in MySQL 5.5 (2010).

Just 15+ years ago Postgres had major issues with concurrency as we think about it today.

And just 10+ years ago a LOT of DB drivers weren't thread-safe and had their own issues dealing with concurrency.

So nearly 30 years ago? Fuhgeddaboudit.
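For readers who didn't live through that era: the standard workaround for non-thread-safe drivers was to never share a connection across threads. A minimal sketch of that per-thread-connection pattern, using Python's sqlite3 purely as a stand-in (the drivers the comment alludes to aren't named):

```python
# Sketch: one DB connection per thread, the classic workaround for
# drivers that weren't safe to share across threads.
import sqlite3
import threading

local = threading.local()

def get_conn(path=":memory:"):
    # Lazily open a connection owned by the calling thread; sharing a
    # single connection object between threads is what older drivers
    # (and sqlite3 by default) refuse or mishandle.
    if not hasattr(local, "conn"):
        local.conn = sqlite3.connect(path)
    return local.conn

def worker(n, results):
    conn = get_conn()
    results[n] = conn.execute("SELECT ?", (n,)).fetchone()[0]

results = {}
threads = [threading.Thread(target=worker, args=(i, results)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

Connection pools in modern drivers automate exactly this bookkeeping, which is part of why the problem feels invisible today.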


I feel like Google search results have gotten tremendously worse over the past 2 years too. It's almost like you have to use AI search to find anything useful now.

Which of course reduces traffic to sites and thus the incentives to create the content you're looking for in the first place :(


I actually think the AI Overviews from Google have improved a lot in the last 2 years. They used to be trash. And now they are often good enough that I don't even switch to ChatGPT anymore.

The traditional search results suffer a lot because AI and AI content generation have enabled a lot of aggressive SEO/spam plays.


There are many groups that “win” by making search results worse. It’s an ongoing battle between them, and if someone blames solely Google for it, they’re way oversimplifying.


I totally agree with you: this reduces traffic to sites, but there were also lots of websites whose information wasn't true or correct.


Does anyone know which tool can be best used instead of Google for "classic", non-AI googling?


Pure non-AI googling won't work, since many websites now use AI to create content. And so far, no search engine has managed to reliably detect and filter that out.


Kagi

