dotwaffle's comments | Hacker News

That's the point, though. An SSH key gives authentication, not authorization. Generally a certificate is a key signed by some other mutually trusted authority, which SSH explicitly tried to avoid.

SSH does support certificate-based auth, and it's a great upgrade to grant yourself if you are responsible for a multi-human single-user system. It provides revocation, short lifetimes, and identity metadata for auditing, all with vanilla tooling that doesn't impose anything on the target system.
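The whole flow works with stock OpenSSH tooling; a minimal sketch (the names, paths, and 8-hour lifetime below are illustrative, not prescriptive):

```shell
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -f "$tmp/user_ca" -N ''      # the mutually trusted CA keypair
ssh-keygen -q -t ed25519 -f "$tmp/id_ed25519" -N ''   # a user's ordinary keypair
# Sign the user's public key: -I sets an identity string for audit logs,
# -n restricts which login names the cert is valid for, -V keeps it short-lived
ssh-keygen -q -s "$tmp/user_ca" -I "alice@example.com" \
    -n alice,deploy -V +8h "$tmp/id_ed25519.pub"
# Inspect the resulting certificate: identity, principals, validity window
ssh-keygen -L -f "$tmp/id_ed25519-cert.pub"
```

On the target system a single sshd_config directive (`TrustedUserCAKeys /etc/ssh/user_ca.pub`) then accepts any cert from that CA, and a `RevokedKeys` list handles revocation.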

> multi human single user system

A rather niche use case with which to promote certificate auth... I'd add that the killer-app feature is not having to manage authorized_keys.


They are remarkably common among long-lived enterprise Linux servers. Think e.g. database servers or web servers from the (much longer-lived) pet era, not the cattle era.

Not sure why you need to belittle one example just to add another


Agreed, this makes sense in principle.

But what I found, empirically, is that a substantial number of observable SSH public keys are (re)used in a way that allows a likely unintended and unwanted determination of the owner's identity.

This consequence was likely not foreseen when SSH pubkey authentication was first developed 20-30 years ago. Certainly, the use and observability of a massive number of SSH keys on just a single server (ssh git@github.com) wasn't foreseen.


You can also sign SSH host keys with an SSH CA.

See the ssh_config and ssh-keygen man pages...
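A minimal sketch of host-key signing (the paths and the example.com hostname are placeholders):

```shell
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -f "$tmp/host_ca" -N ''               # the CA keypair
ssh-keygen -q -t ed25519 -f "$tmp/ssh_host_ed25519_key" -N ''  # a host's keypair
# -h marks this as a host certificate; -n limits which hostnames it matches
ssh-keygen -q -s "$tmp/host_ca" -I "web1 host key" -h \
    -n web1.example.com -V +52w "$tmp/ssh_host_ed25519_key.pub"
# Inspect the resulting host certificate
ssh-keygen -L -f "$tmp/ssh_host_ed25519_key-cert.pub"
```

The host serves the cert via a `HostCertificate` line in sshd_config, and clients trust it with one known_hosts entry (`@cert-authority *.example.com <contents of host_ca.pub>`) instead of a trust-on-first-use prompt per host.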


I've never quite understood why there couldn't be a standardised "reverse" HTTP connection, from server to load balancer, over which connections are balanced. Standardised so that some kind of health signalling could be present for easy/safe draining of connections.


The idea is attractive (especially for draining), but once you try to map arbitrary inbound client connections onto backend-initiated "reverse" pipes, you end up needing standardized semantics for multiplexing, backpressure, failure recovery, identity propagation, and streaming! So, you're no longer just standardizing "reverse HTTP", you’re standardizing a full proxy transport + control plane. In practice, the ecosystem standardized draining/health via readiness + LB control-plane APIs and (for HTTP/2/3) graceful shutdown signals, which solves the draining problem without flipping the fundamental accept/connect roles.


Whether the load balancer connects to the server or the reverse, nothing changes. A modern H2 connection is pretty much just that: one persistent connection between the load balancer and the server, and who initiates it doesn't change much.

The connection being active doesn't tell you that the server is healthy (it could hang, for instance, and you wouldn't know until the connection times out or a health check fails). Either way, you still have to send health checks, and either way you can't know between health checks that the server hasn't failed. Ultimately this has to work for every failure mode where the server can't respond to requests, and in any given state, you don't know what capabilities the server has.


They're 47 inches long. Amazon (UK) has 48-inch-long zip ties for $14.45 (pack of 12), and 60-inch-long for $18. Not quite as thick or wide, sure... But that's not what was in the headline :P


It doesn't say longest either.


... fair point.


The article highlights not just the length, but the comical thickness of it too. It reminds me of those giant promotional watches for some reason...


Those long ones are great for certain…. Niche interests. That rhyme with ink.

Also, law enforcement.


Also utility plenums. Or other things. Literally available retail at Home Depot up to 36" for a dollar a tie. We use them to hold temporary fencing together.


Kink?


Come on, it's the internet, we can't swear here!


> An extra 131 GB of bandwidth per download would have cost Steam several million dollars over the last two years

Nah, not even close. Let's guess and say there were about 15 million copies sold. 15M * 131GB is about 2M TB (2000 PB / 2 EB). At 30% mean utilisation, a 100Gb/s port will do 10 PB in a month, and at most IXPs that costs $2000-$3000/month. That makes it about $400k in bandwidth charges (I imagine 90%+ is peered or hosted inside ISPs, not via transit), and you could quite easily build a server that would push 100Gb/s of static objects for under $10k a pop.

It would surprise me if the total additional costs were over $1M, considering they already have their own CDN setup. One of the big cloud vendors would charge $100M just for the bandwidth, let alone the infrastructure to serve it, based on some quick calculation I've done (probably incorrectly) -- though interestingly, HN's fave non-cloud vendor Hetzner would only charge $2M :P
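To make the arithmetic above reproducible (the 15M copies and ~$2k/port/month figures are the guesses from this comment, not Valve's numbers):

```shell
awk 'BEGIN {
  total_pb = 15e6 * 131 / 1e6                 # 15M copies x 131 GB extra, in PB
  port_pb  = 100 * 0.30 / 8 * 2592000 / 1e6   # 100Gb/s port at 30% util over 30 days
  cost     = total_pb / port_pb * 2000        # ~$2k per 100G IXP port per month
  printf "%.0f PB total, %.1f PB/port/month, ~$%.0fk\n", total_pb, port_pb, cost / 1000
}'
```

That prints roughly 1965 PB total across about 200 port-months, i.e. around $400k in port fees before peering and server costs.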


Isn't it a little reductive to look at basic infrastructure costs? I used Hetzner as a surrogate for the raw cost of bandwidth, plus overheads. If you need to serve data outside Europe, the budget tier of BunnyCDN is four times more expensive than Hetzner.

But you might be right - in a market where the price of the same good varies by two orders of magnitude, I could believe that even the nice vendors are charging a 400% markup.


Yeah, I always laugh when folks talk about how expensive they claim bandwidth is for companies. Large “internet” companies are just paying a small monthly cost for transit at an IX. They aren't paying $xx/gig ($1/gig) like the average consumer is. If you buy a 100G port for $2k, it costs the same whether you're using 5 GB a day or 8 PB per day.


Off topic question.

> I imagine 90%+ is peered or hosted inside ISPs, not via transit

How does hosting inside ISPs function? Does the ISP have to MITM? I've heard similar claims for Netflix and other streaming media, like ISPs host/cache the data themselves. Do they have to have some agreement with Steam/Netflix?


Yeah, Netflix will ship a server to an ISP (Cox, Comcast, Starlink, Rogers, Telus, etc.) so the customers of that ISP can access that server directly. It improves performance for those users and reduces the load on the ISP's backbone/transit. I'm guessing other large companies do this as well.

A lot of people are using large distributed DNS resolvers like 8.8.8.8 or 1.1.1.1, and these can sometimes direct users to suboptimal (often distant) CDN servers, so the EDNS Client Subnet extension was created to help with this. I always use 9.9.9.11 instead of 9.9.9.9 to hopefully help improve performance.


The CDN/content provider ships servers to the ISP, which puts them into its network. The ISP just provides connectivity and isn't involved at the content level, so no MITM etc. is needed.



I started rewriting gcsfuse using https://github.com/hanwen/go-fuse instead of https://github.com/jacobsa/fuse and found it rock-solid. FUSE has come a long way in the last few years, including things like passthrough.

Honestly, I'd give FUSE a second chance; you'd be surprised at how useful it can be -- after all, it's literally running in userland so you don't need to do anything funky with privileges. However, if I were starting afresh on a similar project I'd probably be looking at using 9p2000.L instead.


I know quite a few AFS systems that moved to AuriStor's YFS: https://www.auristor.com/openafs/migrate-to-auristor/auristo...

As I understand it, it mitigated many of those issues, but is still very "90s" in operation.

I've been flirting with the idea of writing a replacement for years, about time I had a go at it!


I may be confusing two systems, but I believe that AFS system also encompassed the first iteration of “AWS Glacier” I encountered in the wild: a big storage system that required queuing a job to a tape array, or pinging an undergrad to manually load something for retrieval.


I know a lot of people who use it, in fact I'm one of them.

I have an @gmail.com account with about 20 years of stuff associated with it, from purchases to YouTube subscriptions, from calendars to GCP accounts.

However, I use a vanity email (me@somedomain.example) that everyone I know uses to get hold of me. Until about 10 years ago I could just forward emails but that slowly became unworkable as more and more stuff just broke due to SPF etc. So, I've been using POP pickup (and accepting the 5-30 minute delay) ever since.

As I understand it, I can't move all my gmail.com data into a GWork profile easily, and POP has worked for years. This is very frustrating.


"A number" much closer to zero than the the number of Gmail DAUs.


From a network point of view, BitTorrent is horrendous. It has no way of knowing network topology which frequently means traffic flows from eyeball network to eyeball network for which there is no "cheap" path available (potentially causing congestion of transit ports affecting everyone) and no reliable way of forecasting where the traffic will come from making capacity planning a nightmare.

Additionally, as anyone who has tried to share an internet connection with someone heavily torrenting knows, the excessive number of connections means the overall quality of non-torrent traffic on the network goes down.

Not to mention, of course, that BitTorrent has a significant stigma attached to it.

The answer would have been a Squid cache box before, but HTTPS makes that very difficult as you would have to install MITM certs on all devices.

For container images, yes, you have pull-through registries etc., but not only are these non-trivial to set up (as a service and for each client), the cloud providers charge quite a lot for storage, making it difficult to justify when not having a cache "works just fine".

The Linux distros (and CPAN and TeX Live etc.) have had mirror networks for years that partially address these problems, and there was an OpenCaching project running that could have helped, but that model isn't really sustainable for the wide variety of content that would need caching, beyond video media or packages that only appear on mirrors hours after publishing.

BitTorrent might seem seductive, but it just moves the problem, it doesn't solve it.


> From a network point of view, BitTorrent is horrendous. It has no way of knowing network topology which frequently means traffic flows from eyeball network to eyeball network for which there is no "cheap" path available...

As a consumer, I pay the same for my data transfer regardless of the location of the endpoint though, and ISPs arrange peering accordingly. If this topology is common then I expect ISPs to adjust their arrangements to cater for it, just the same as any other topology.


> ISPs arrange peering accordingly

Two eyeball networks (consumer/business ISPs) are unlikely to have large PNIs with each other across wide geographical areas to cover sudden bursts of traffic between them. They will, however, have substantial capacity to content networks (not just CDNs, but AWS/Google etc) which is what they will have built out.

BitTorrent turns fairly predictable "North/South" traffic where capacity can be planned in advance and handed off "hot potato" as quickly as possible, into what is essentially "East/West" with no clear consistency which would cause massive amounts of congestion and/or unused capacity as they have to carry it potentially over long distances they have not been used to, with no guarantee that this large flow will exist in a few weeks time.

If BitTorrent knew network topology, it could act smarter -- CDNs accept BGP feeds from carriers and ISPs so that they can steer the traffic, this isn't practical for BitTorrent!


> If BitTorrent knew network topology, it could act smarter -- CDNs accept BGP feeds from carriers and ISPs so that they can steer the traffic, this isn't practical for BitTorrent!

AFAIK this has been suggested a number of times, but has been refused out of fears of creating “islands” that carry distinct sets of chunks. It is, of course, a non-issue if you have a large number of fast seeds around the world (and if the tracker would give you those reliably instead of just a random set of peers!), but that really isn't what BT is optimized for in practice.


Exactly. As it happens, this is an area I'm working on right now -- instead of using a star topology (direct), or a mesh (BitTorrent), or a tree (explicitly configured CDN), to use an optimistic DAG. We'll see if it gets any traction.


bittorrent will make best use of what bandwidth is available. better think of it as a dynamic cdn which can seamlessly incorporate static cdn-nodes (see webseed).

it could surely be made to care about topology, but imho handing that problem to the congestion control and routing mechanisms at lower levels works well enough and should not be a problem.


> bittorrent will make best use of what bandwidth is available.

At the expense of other traffic. Do this experiment: find something large-ish to download over HTTP, perhaps an ISO or similar from Debian or FreeBSD. See what the speed is like, and try looking at a few websites.

Now have a large torrent active at the same time, and see how slow the HTTP download drops to, and how much slower the web is. Perhaps try a Twitch stream or YouTube video, and see how the quality suffers greatly and/or starts rebuffering.

Your HTTP download uses a single TCP connection, most websites will just use a single connection also (perhaps a few short-duration extra connections for js libraries on different domains etc). By comparison, BitTorrent will have dozens if not hundreds of connections open and so instead of sharing that connection in half (roughly) it is monopolising 95%+ of your connection.

The other main issue I forgot to mention is that on most cloud providers, downloading from the internet is free, uploading to the internet costs a lot... So not many on public cloud are going to want to start seeding torrents!


If your torrent client is having a negative effect on other traffic then use its bandwidth limiter.

You can also lower how many connections it makes, but I don't know anyone that's had need to change that. Could you show us which client defaults to connecting to hundreds of peers?


My example was to show locally what happens -- the ISP does not have control over how many connections you make. I'm saying that if you have X TCP connections for HTTP and 100X TCP connections for BitTorrent, the HTTP connections will be drowned out. Therefore, when the link at your ISP becomes congested, HTTP will be disproportionately affected.

For the second question, read the section on choking at https://deluge-torrent.org/userguide/bandwidthtweaking/ and Deluge appears to set the maximum number of connections per torrent of 120 with a global max of 250 (though I've seen 500+ in my brief searching, mostly for Transmission and other clients).

I'll admit a lot of my BitTorrent knowledge is dated (having last used it ~15 years ago) but the point remains: ISPs are built for "North-South" traffic, that is: To/From the customer and the networks with the content, not between customers, and certainly not between customers of differing ISPs.


Torrents don't use anything like TCP congestion control and 100 connections will take a good chunk of bandwidth but much much less than 100 TCP flows.


... What? You realise BitTorrent runs over TCP/IP right?


TCP is a fallback if it can't use https://en.m.wikipedia.org/wiki/Micro_Transport_Protocol

I should have said they avoid TCP in favor of very different congestion control, sorry.


Interesting... It's been ~15 years since I last used BitTorrent personally, and I had asked a friend before replying and they swore that all their traffic was TCP -- though perhaps that may be due to CGNAT or something similar causing that fallback scenario you describe.

Thanks for the info, and sorry for jumping to a conclusion! Though my original point stands: Residential ISPs are generally not built to handle BitTorrent traffic flows (customer to customer or customer to other-ISP-customer across large geographic areas) so the bursty nature would cause congestion much easier, and BitTorrent itself isn't really made for these kinds of scenarios where content changes on a daily basis. CDNs exist for a reason, even if they're not readily available at reasonable prices for projects like OP!


The number of connections isn’t relevant. A single connection can cause the same problem with enough traffic. Your bandwidth is not allocated on a per-connection basis.


If you download 2 separate files over HTTP, you'd expect each to get roughly 1/2 of the available bandwidth at the bottleneck.

With 1 HTTP connection downloading a file and 100 BitTorrent connections trying to download a file, all trying to compete, you'll find the HTTP throughput significantly reduced. It's how congestion control algorithms are designed: rough fairness per connection. That's why the first edition of BBR that Google released was unpopular, it stomped on other traffic.
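A toy illustration of that per-connection fairness (assuming every flow gets a roughly equal share of the bottleneck, which is only an approximation of real congestion-control behaviour):

```shell
# 1 HTTP flow competing with N torrent connections at one bottleneck link
N=100
awk -v n="$N" 'BEGIN {
  printf "HTTP share: ~%.1f%%, torrent share: ~%.1f%%\n", 100 / (n + 1), 100 * n / (n + 1)
}'
```

With N=100 the single HTTP flow is squeezed to roughly 1% of the link, which is the "drowned out" effect described above.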


> also now includes a hazard perception test

I took my test nearly 25 years ago, and this was present then -- for the avoidance of doubt, the UK test has always been very thorough, though not quite as thorough as those in places like Finland where apparently they have skid pans and similar!


Makes sense that Finland has such things though, when the roads are covered in snow and ice for a lot of the year.

Though this year we did good in our capital: "Helsinki has not recorded a single traffic fatality in the past 12 months, city and police officials confirmed this week."


Well done Helsinki! Unless there's a massive problem with police recording practices.


Seems like we were on either side of a threshold - I took mine ~35 years ago and the only "theory" test was the examiner asking me three basic questions after the practical test, like "what can lead to skidding?", to which the answer was "rapid acceleration, steering or braking". The theory side of things essentially didn't exist.


Same in Norway. Skid pans and also motorway driving. The course also includes a piece where the instructor picks a place an hour's drive away and tells the student to get there and demonstrate that they can not only drive under instruction but also plan their own route and react properly to challenges along the way.


Interestingly, I saw data from a road safety programme for young people that showed skid pan training actually made young men less safe not more, because they became even more overconfident about their ability to “react quickly” if bad things happened. Turns out that a bit of humility and slowing down are the main skills needed to avoid accidents!


That's true, on the other hand it made young women safer. This happened in Norway when the skid pan was made a compulsory part of the course a couple of decades ago and the insurance companies soon noticed an increase in reckless driving among young men but the opposite in young women.


    > an increase in reckless driving among young men but the opposite in young women
This is fascinating. Does anyone know the root cause here?


The usual 'explanation' is that young men had fun on the skid pan and came to no harm there and wanted to continue having fun on the road while young women discovered how little control they had of the car and hence became more cautious.


I'd imagine that differs from country to country. For example, the risks of skidding might be higher in Finland due to the colder climate.

Whereas in the UK, black ice isn't as common, so on days when it's icy the best advice is just to take it slow and stick to salted routes.


Just watch as most libraries now update their go.mod to say 1.25, despite using no 1.25 features, meaning those who want to continue on 1.24 (which will still have patch releases for six months...) are forced to remain on older versions or jump through lots of hoops.

It's a "minimum" version, not a dependency lock!


This is a common issue with Rust projects as well. At least with Rust you have the idea of "MSRV" (minimum supported rust version). I've never heard it discussed within Go's community.

There's no MSGV. Everyone pins the latest.

This also plagues dependencies. People pin to a specific version (e.g., 1.23) instead of just a minimum (at least 1.0, or at least 1.2, etc.).


The "go x.yy" line in go.mod is supposed to be that MSGV, but `go mod init` will default it to the current version on creation. While you could have tooling like `cargo-msrv` to determine what that value would be optimal, the fact that only the latest two Go versions are supported means it's not particularly useful in most cases.


>Just watch as most libraries now update...

Haven't seen anything like this. Most packages actually have 1.13 in their go.mod

Rarely do I see at least 1.19


Now that I think about it more, when I've seen it happen before, it tends to be on projects that use dependabot / renovate. If any of those updates depend (directly or transitively) on a later version of Go, the go.mod would be bumped accordingly for them.

I have a vague feeling it was related to testcontainers or docker, and at the time that job's Go install was always at least 6 months behind. At least with recent Go, it'll switch to a later version that it downloads via the module proxy, that would have helped a lot back then :S


Never seen this happen. Most popular libraries support at least 2 previous versions

