Hacker News

Funny how all the links, including the ones to their own pages, are routed through google.com/url, e.g. the link "Assets Available to Download". Usually tracking isn't quite this visible.


It's because their blog is hosted on blogger.com (yeah, weird decision), which is owned by Google and does that by default.


I also have a blogger.com blog.

Why? Because I had it for 20+ years, and I still didn't find an easy way to automatically migrate it to WordPress.


You're also presumably not a $400m+ company, which makes it more interesting.


I assure you no amount of capital trivializes the endeavour of migrating to/from WordPress.

GP speaks wisdom.


In my experience, the blog usually falls in some weird space where the marketing team owns it somehow. It’s best to leave them be and let them handle it, because if you suggest an alternative and then something goes wrong or isn’t to their liking you’ll never hear the end of it.


My point was that it's not trivial to migrate away from blogger.


Clearly engineers at Netflix have more important work to do.


It is very odd. I don’t see a good reason, not even tracking.


Aren't those just the URLs in google search results if you copy from the results page instead of clicking through to the destination?


The reason for the intermediary is that the click-through sends the previous URL to the next server in the Referer header.

The only real way to avoid leaking specific URLs from the source page to an arbitrary other server is an intermediary redirect like this.

All the big products put an intermediary in for that reason, though many of them surface it as a user-visible "you are leaving our product" page, whereas Google mostly does it as an immediate redirect.

The copy/paste behavior is mostly an unfortunate side effect, not a deliberate feature.
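The pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Google's actual implementation: `REDIRECTOR` and both function names are made up, and the real google.com/url endpoint adds signatures and other parameters this sketch omits.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical intermediary endpoint (stand-in for google.com/url).
REDIRECTOR = "https://example.com/url"

def wrap_link(target: str) -> str:
    """Rewrite an outbound link to go through the redirector.

    The destination server then sees the redirector, not the page the
    user was actually reading, as the Referer.
    """
    return f"{REDIRECTOR}?{urlencode({'q': target})}"

def unwrap_link(wrapped: str) -> str:
    """Server side of the redirector: recover the target from the
    query string, then respond with a 302 Location: <target>."""
    return parse_qs(urlparse(wrapped).query)["q"][0]
```

The copy/paste annoyance falls straight out of this design: the `href` users copy from the page is the wrapped URL, not the destination.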


I don't understand. They are redirecting to their own S3 bucket, so who would be the recipient of the leak?

Also, isn't this what Referrer-Policy is for? https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
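For what it's worth, the header-based alternative the parent is pointing at would look roughly like this; a minimal sketch, assuming the linking site controls its own response headers (the function name is made up):

```python
def outbound_safe_headers() -> dict:
    """Response headers the *linking* page would send.

    A compliant browser then trims the Referer on outbound clicks:
    cross-origin destinations see only the origin, and nothing at all
    on an HTTPS -> HTTP downgrade.
    """
    return {
        "Content-Type": "text/html; charset=utf-8",
        "Referrer-Policy": "strict-origin-when-cross-origin",
    }
```

The catch, as the replies below note, is that this relies entirely on the user agent honoring the policy, which a server-side redirect does not.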


Quoting web standards? You're more optimistic than I am. Unfortunately, nobody uses them consistently or accurately (PUT vs. POST for create/update is a really good example: nobody agrees). It's a shame, too, because there's a lot of richness to the web spec. Most people don't even use HEAD to avoid wasteful REST calls when they already have the data.
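The HEAD trick mentioned above is just an ordinary request with the method swapped; a minimal stdlib sketch (the URL is a placeholder):

```python
import urllib.request

def head_request(url: str) -> urllib.request.Request:
    """Build a HEAD request: same semantics as GET but no response
    body, so a client can compare ETag / Last-Modified against its
    cache before deciding whether to re-download the resource."""
    return urllib.request.Request(url, method="HEAD")

req = head_request("https://example.com/data.json")
```

Sending it with `urllib.request.urlopen(req)` would return headers only, which is the whole point.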


I was replying to

> All the big products put an intermediary for that reason

Surely whoever maintains the big products can add headers if they want?

And this is about people who care enough about not showing up in Referer headers to do something about it, rather than about people in general not understanding the full spec.


I worked on these big web products before, and the answer then was no: you couldn't trust it to be honored, and a leak would have been considered a privacy incident, so it was better to just use the redirect and take no risk. You can't trust the user agents, for example.

Not sure if the reliability of the intentional mechanism has improved enough that this is just legacy, or if there are entirely new reasons for it in 2026.


The other problem is that if you're as big as Google, you cannot assume everyone will honor this, which is why they do these redirects.


Referrer-Policy is a response header, so in this case it would be Google sending it, and the browsers that would be honouring it. You have to hope that the browser makers get it correct... Unless I misunderstood?


Blogger predates the existence of this header by many years. Blogger, I believe, has also been in maintenance mode for many years.


It sees periodic major updates to keep it in line with standards. That's not much more than maintenance mode, but it's more than just keeping the servers running. It seems like someone at Google pays attention to it and keeps it from falling behind, but I suspect the same was true of Google Reader until it wasn't.


>someone at Google pays attention to it and keeps it from falling behind

I feel like it's the same for Google My Maps. They even discontinued the Android app, so you can only use it on the web. It totally feels like there's a single guy keeping the whole system up.


Not if you use the ClearURLs addon ;)


And when I click them I get a page with "Did you mean netflix.com? The site you just tried to visit looks fake. Attackers sometimes mimic sites by making small, hard-to-see changes to the URL." which then sends me to the Netfçix home page. Chrome on MacOS.


it's because their s3 bucket is called "download.opencontent.netflix.com.s3.amazonaws.com". the subdomain makes chrome think it's pretending to be "netflix.com"


But they said it sends them to Netfçix? That seems incorrect


...how is that even possible?


The ios gmail app does the same thing, but why? I would assume the app could just transparently relay the click through its already-open grpc channel to google's servers, and it would be faster for them and (more importantly) for me.



