Hacker News | mikepurvis's comments

I was never a user of PyPy, but I really appreciated the (successful) effort to cleanly extract from Python a layer of essential primitives upon which the rest of the language's features and sugar could be implemented.

It's about more than just what is syntax or a language feature: for example, RPython provides classes, but only very limited multiple inheritance; all the MRO machinery is implemented in RPython for PyPy itself.
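As a rough illustration of the kind of machinery that gets implemented on top of the restricted subset, here is a sketch of C3 linearization, the algorithm behind Python's MRO. This is illustrative plain Python, not PyPy's actual RPython implementation:

```python
# Sketch of C3 linearization, the algorithm behind Python's MRO.
def c3_merge(seqs):
    result = []
    seqs = [list(s) for s in seqs if s]
    while seqs:
        for seq in seqs:
            head = seq[0]
            # A head is acceptable if it appears in no other sequence's tail.
            if not any(head in s[1:] for s in seqs):
                break
        else:
            raise TypeError("inconsistent hierarchy")
        result.append(head)
        # Remove the chosen head from every sequence and drop empties.
        seqs = [s for s in ([x for x in s if x is not head] for s in seqs) if s]
    return result

def mro(cls):
    return [cls] + c3_merge([mro(b) for b in cls.__bases__] + [list(cls.__bases__)])

# Classic diamond hierarchy: the result matches CPython's own __mro__.
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

assert mro(D) == list(D.__mro__)  # D, B, C, A, object
```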


"same rack" should still be fine for 1m passive TB5 cable though, right?

Not all listens show the same intention. If I go to the barbershop and they're playing Spotify top-40 playlists all day long, that is very different from me actively choosing what I want to listen to for a few hours a month in my car, or putting on Friends Per Second while doing the dishes.

My $7/mo should be going to the artists I actually chose to listen to, not the stuff that droned passively for hours in background environments. Particularly when I'm actually a high margin customer for Spotify; the cost to them of my subscription is low since I spend so little time on the service. That makes it all the more galling that my subscription cost is mostly going to Taylor Swift and Ed Sheeran.


I mean, I understand and agree, and I'm pretty sure that Spotify Premium users skew towards less mainstream tastes, so I agree it would be better for smaller artists and would probably change the power balance (well, if we forget that music labels exist). But yeah, as others pointed out, if you were to give 70% of your subscription cost to the artist who composed/performed the single track you listened to this month, it would be very different.

It would be better.

If my listening is 9 hours a day of Ed Sheeran and I pay $10, Spotify takes $3 as a platform fee, and Ed should get the rest: $7

If your listening is 1 hour a day of Dave Smith and you pay $10, Spotify takes $3 as a platform fee, and Dave should get the rest: $7

That would be a fair way of distributing the revenue

But it's not. Instead Ed Sheeran gets 90% of listens and Dave Smith 10%, the listen pot is $14, so Ed gets $12.60 and Dave gets $1.40
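The two payout schemes above can be sketched in a few lines. The numbers are the thread's own hypotheticals (two $10 subscribers, a $3 platform fee, Ed with 90% of pooled listens):

```python
PLATFORM_FEE = 3.0  # hypothetical flat fee from the thread

def pro_rata(subs, listens):
    """Pool every subscription, then split by global listen share."""
    pot = sum(price - PLATFORM_FEE for price in subs.values())
    total = sum(listens.values())
    return {artist: pot * n / total for artist, n in listens.items()}

def user_centric(subs, per_user_listens):
    """Each subscriber's money goes only to the artists they listened to."""
    payout = {}
    for user, price in subs.items():
        pot = price - PLATFORM_FEE
        total = sum(per_user_listens[user].values())
        for artist, n in per_user_listens[user].items():
            payout[artist] = payout.get(artist, 0.0) + pot * n / total
    return payout

subs = {"me": 10.0, "you": 10.0}
per_user = {"me": {"Ed Sheeran": 270}, "you": {"Dave Smith": 30}}
listens = {"Ed Sheeran": 270, "Dave Smith": 30}

print(pro_rata(subs, listens))       # Ed: 12.60, Dave: 1.40
print(user_centric(subs, per_user))  # Ed: 7.00, Dave: 7.00
```

Same $14 pot either way; the pooled model just routes most of one subscriber's money to an artist they never played.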


At the end of the day, indies need to be on Spotify much more than Spotify needs them there. But for mainstream artists, it's the opposite; so the representatives of top-40 artists are the ones dictating the terms of how the system works for everyone, and unsurprisingly the system they've settled on is one that seems fair enough as long as you don't think too deeply about it, but ensures that the biggest slice of the pie goes to themselves.

Given that reality, I wonder why it is that spending in this category seems to be so much less effective in the US relative to other nations? Why is the US #22 in general quality of life [1], and the bottom of many rankings of health system performance [2]?

Speaking as a Canadian, I wonder if at least part of it is the attitude that investments in these areas are "welfare" and not simply a part of the portfolio of essential services that are delivered by the state to citizens?

[1]: https://www.usnews.com/news/best-countries/rankings/quality-...

[2]: https://www.commonwealthfund.org/publications/fund-reports/2...


It may just be my cynicism talking, but it seems that it comes down to the power of lobbyists. In the US, the healthcare companies control the government. Elsewhere, the government controls the healthcare companies.

The media narrative is a factor too. I have extended family, friends, and work colleagues in the US, many of whom are wealthy and well-traveled, and even a lot of them will still loudly assert obviously disprovable untruths like "well, at least we don't die in waiting rooms like in Canada", or "at least we don't have death panels deciding who gets to have life-saving treatment", or, worst of all, "ehh, I mean, I have good insurance, and outcomes are much better here for top-5%ers, so I don't really care about the rest of the system." All while decades of TV hospital dramas depict a well-oiled medical system delivering effective and efficient care to people, with nary a whisper about how it's getting paid for.

It's got to be desperately frustrating trying to fight this kind of thinking when you've got whole communities who have never even thought to question it.

My main hope at this point is with bottom-up type efforts. Let Mamdani show people that an effective city government can fill potholes and operate a few at-cost supermarkets. Let that be the start of citizens expecting more than chainsaw-waving and twitter meltdowns for their tax dollars.


> "at least we don't have death panels deciding who gets to have life-saving treatment"

They do, it's just that in the western world the death panels work for the government and optimize for "given a certain level of cost, how do we maximize lives saved", whereas in the USA the death panels work for insurance companies, and optimize for "reduce cost". The latter is a much easier job.

Maybe point your US friends to google "united healthcare denial rates" and see that it's around 33%. That's the death panel at work.


> Speaking as a Canadian, I wonder if at least part of it is the attitude that investments in these areas are "welfare" and not simply a part of the portfolio of essential services that are delivered by the state to citizens?

Also speaking as a Canadian, I don't understand the distinction you're drawing.


The distinction is in whether there's a value judgment. Are healthcare and welfare something we assume is part of the package of living in a developed nation, or an indulgent extra, subject to suspicion and scrutiny above and beyond what essentials like military spending get?

I would say that the mainstream Canadian view is the opposite of this. We expect healthcare funding, and many are supportive of strikes when it gets cut, but we are much more likely to treat the military budget as the purchase of a lot of unnecessary toys.


"none of these are going to be supported by upstream in the way a cheap Intel or AMD desktop will be"

Going big-name doesn't even help you here. It's the same story with Nvidia's Jetson platforms; they show up, then within 2-3 years they're abandonware, trapped on an ancient kernel and EOL Ubuntu distro.

You can't build a product on this kind of support timeline.


For what it’s worth, Jetson at least has documentation, front ported / maintained patches, and some effort to upstream. It’s possible with only moderate effort and no extensive non-OEM source modification to have an Orin NX running an OpenEmbedded based system using the OE4T recipes and a modern kernel, for example, something that isn’t really possible on most random label SBCs.

Yup, I'm working a lot with Jetsons, and having the Orin NX on 22.04 is quite limiting sometimes, even with the most basic things. I got a random USB Wi-Fi dongle for it, and nope! Not supported in kernel 5.15, now have fun figuring out what to do with it.

> I want to pick up and move to another harness and/or model with minimal fuss. Buying in to things like this would make that much harder.

Yes, I expect that is very much the point here. A bunch of product guys got around a whiteboard and said: okay, the thing is in wide use, but the main moat is that our competitors are even more distrusted in the market than we are; other than that it's completely undifferentiated and can be swapped out in a heartbeat for multiple other offerings. How do we persuade our investors we have a locked-in customer base that won't just up stakes in favour of other options, or just run open source models themselves?


I think they really kneecapped themselves when they released the Claude GitHub integration, which allows anyone to use their Claude subscription to run Claude Code in GitHub Actions for code reviews and arbitrary prompts. Now they're trying to backtrack on that with a cloud solution.

Unsure if sarcastic, but most ISPs will throttle that traffic long before you use anything close to <bandwidth rating> times <seconds in a month>.

I've been running an RPi-based torrent client 24/7 in several countries and never experienced that. It eats a few TB per month, not the full line, but a pretty decent amount. I guess it really depends on the country.

I'm in the UK with Virgin Media on their 1Gbps package, going through multiple TB a month and I'm yet to be throttled in any way.

Well, multiple TB isn't close to your bandwidth rating. It only takes 2% of your connection in a single direction to hit 6TB a month.

Ha, yes I suppose that's correct.
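The back-of-envelope math behind the "2% of a gigabit line is ~6TB a month" claim is easy to verify:

```python
# Monthly transfer for a line at a given average utilization, one direction.
BYTES_PER_GBIT = 1e9 / 8            # 1 Gbit/s = 125 MB/s
SECONDS_PER_MONTH = 30 * 24 * 3600  # 30-day month

def monthly_tb(line_gbps, utilization):
    """Terabytes transferred in a month at the given average utilization."""
    total_bytes = line_gbps * BYTES_PER_GBIT * utilization * SECONDS_PER_MONTH
    return total_bytes / 1e12

print(monthly_tb(1.0, 1.00))  # full line rate: 324.0 TB/month
print(monthly_tb(1.0, 0.02))  # 2% average use: ~6.5 TB/month
```

So "multiple TB a month" on a gigabit package is indeed only a couple of percent of the theoretical maximum.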

I’ve used Spectrum and their predecessors since the 90s. Never ran into this, although the upstream speeds are ridiculously slow, and they used to force Netflix traffic to an undersized peer circuit.

I'm unsure if you're being sarcastic or not; never have I used an ISP that would throttle you for any reason. This is unheard of in the countries I've lived in, and I'm not sure many people would even subscribe to something like that; it sounds very contrary to how a typical at-home broadband connection works.

Of course, in countries where the internet isn't as developed as in other parts of the world, this might make sense, but modern countries don't tend to do that, at least in my experience.


Alas, "isn't so developed" applies to the US: https://arstechnica.com/tech-policy/2020/06/cox-slows-intern...

My parents have gotten hit by this. Dad was downloading huge video files at one point on his WiFi and his ISP silently throttled him.

A common term is "data cap": https://en.wikipedia.org/wiki/Data_cap


> Alas, "isn't so developed" applies to the US

Wow, I knew that was generally true, didn't know it was true for internet access in the US too, how backwards...

> A common term is "data cap": https://en.wikipedia.org/wiki/Data_cap

I think most are familiar with throttling because most (all?) phone plans have a data cap at some point, but I don't think I've heard of any broadband connections here with data caps; that wouldn't make any sense.


Data caps are just documenting the reality that ISPs oversubscribe: if they sell a hundred 1Gb/s connections to a neighborhood, it's highly unlikely they're peering that neighborhood onto the Internet at large at 100Gb/s. I don't know what the current standard is, but in the past it's been anywhere from 10-to-1 to 100-to-1, so a hundred 1Gb/s connections might be sharing 1-10Gb/s of uplink; and if usage starts to saturate that, they need a way of backing off that is "fair". Data caps are one of the ways they inform the customer of such.

I've seen it with my new fiber rollout: early on, every single customer, no matter their purchased speed, got 1Gb/s up and down. As more customers came online and usage grew, they're not limiting anyone, but you trend closer to your advertised rate. My upload is still faster than my download, though, because most of my neighborhood is downloading and few are uploading.
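The oversubscription arithmetic above is just a ratio of sold capacity to uplink capacity. A trivial sketch, using the hypothetical figures from the comment:

```python
def contention_ratio(subscribers, sub_rate_gbps, uplink_gbps):
    """How oversubscribed a shared uplink is: sold capacity / actual capacity."""
    return subscribers * sub_rate_gbps / uplink_gbps

# A hundred 1 Gb/s subscribers sharing various uplinks:
print(contention_ratio(100, 1.0, 1.0))   # 100.0  -> 100:1
print(contention_ratio(100, 1.0, 10.0))  # 10.0   -> 10:1
```

The ratio only bites when average utilization approaches 1/ratio, which is why most subscribers never notice it.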


I have 5 Gbps symmetric at home. I and my fiancee both work from home, so our backup fiber connection from another provider is 2 Gbps. We can also both tether to cell phones if necessary. We can get 5G home wireless Internet here, too, and we might ditch our 2 Gbps line in favor of that as a backup. We moved from Texas back home to Illinois last year, and one of the biggest considerations was who had service at what tiers due to remote work. Some of the houses we looked at in the same three-county area in the Chicago suburbs didn’t even have 5G home available (not from AT&T, Verizon, or T-Mobile anyway).

My parents have 5G wireless home as their primary connection, and that was only introduced in their area a couple of years ago. Before that, they could get dial-up, 512 kbps wireless with about a $1000 startup cost, ISDN (although the phone company really didn’t want to sell it to them), Starlink, or HughesNet. The folks across the asphalt road from them had 20 Mbps Ethernet over power lines years ago, and that’s now I think 250 Mbps. It’s a different power company, though, so they aren’t eligible.

Around 80% of the US population lives in large urban areas. The other 20% of the population range from smaller towns to living many kilometers from any town at all. There’s a lot of land in the US.


Here in dense NYC, most apartments I've lived in have but a single ISP available. It's common to hunt for apartments by searching the address on service maps.

I'm pretty sure one landlord was cut in by his ISP, as he skipped town when I tried to ask about getting fiber, and his office locked their door and drew their shades when I went there with a technician on two occasions. The final time, we got there before they opened and the woman ran into the office and slammed the door on us.


It's pretty common for apartments to have a single provider, especially high-rises or buildings built before broadband was common. It's unfortunate, but the cost of running wiring for multiple providers through old buildings can be prohibitive. The providers won't pay to install it for a single unit, and other tenants might not like the disruption if they're not going to use the new service. If you get a big enough block of tenants to pre-sign, then it becomes a conversation more worth having for the provider and the landlords.

Our ISPs conspire to avoid competition (AKA "overbuilding") and so stuff like this just festers. It's truly a shame.

The original Boosted Boards Kickstarter video has a lot of this energy:

https://www.youtube.com/watch?v=IWV8irg64oM


Love that it's actually linked as well; too bad that user isn't still active.

In codebases where PRs are squashed on merge, the commit messages on the main branch end up being the PR body description text, which is actually reviewed, so I find it tends to be much better.

And in every codebase I've been in charge of, each PR has one or more issue #s linked which describe every agonizing detail behind that work.

I understand this isn't in line with traditional git SCM, but it's a very powerful workflow if you are OK with some hybridization.
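The mechanics of this workflow can be sketched with plain git in a throwaway repo; the branch names, messages, and PR number here are all made up for illustration:

```shell
# Sketch: squash-merging a feature branch so main gets one reviewed message.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q -b main repo && cd repo
git config user.email you@example.com
git config user.name you

echo base > file.txt
git add file.txt
git commit -qm "initial commit"

# Messy work-in-progress history on the feature branch.
git checkout -qb feature
echo wip >> file.txt && git commit -qam "afkifrj"
echo more >> file.txt && git commit -qam "my hands are writing letters"

# Squash-merge: stages the combined diff without committing, so the
# single commit message can be the reviewed PR body text.
git checkout -q main
git merge --squash -q feature
git commit -qm "Add feature X (#123)

The PR body, written and reviewed once, becomes the main-branch message."

git log --format=%s   # only the two tidy subject lines survive on main
```

GitHub's "Squash and merge" button does essentially this, pre-filling the message from the PR title and body.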


It works until you migrate to a new issue tracker; in 5 years we're on our 3rd. I saw that at FAANG and startups alike. Then someone might dump the contents to JSON or just PDF for archival, but it's much easier to have the commit message carry the relevant info. Only the relevant info, though: lots of small details can stay on the issue, and if someone really needs them they can search those archives.

I personally find this to be a substantially better pattern. That squashed commit also becomes the entire changeset - so from a code archeology perspective it becomes much easier to understand what and why. Especially if you have a team culture that values specific PRs that don’t include unrelated changes. I also find it thoroughly helpful to be able to see the PR discussions since the link is in the commit message.

I agree, much as it's a loss for git as a distributed system (though I think that ship sailed long ago regardless). As far as unrelated changes, I've been finding that another good LLM use-case. Like hey Claude pull this PR <link> and break it up into three new branches that individually incorporate changes A, B, and C, and will cleanly merge in that order.

One minor nuisance that can come up with GitHub in particular when using a short reference like #123 is that the link breaks if the repo is forked or merged into something else. For that reason, I try to give full-URL references at least when I'm manually inserting them, and I wish GitHub would do the same. Or perhaps add some hidden metadata to the merge commit saying "hey, btw, that #123 refers to <https://github.com/fancy-org/omg-project/issues/123>".


Yep - we do exactly the same with Claude. In fact - part of our PR review automation with Claude includes checking whether the PR is tightly scoped or should be split apart. I’d say in about 80% of the cases the Claude review bot is accurate in its assessment to break it up? It’s optional feedback but useful - especially when we get contributors outside our immediate team that maybe don’t know our PR norms and the kinds of things we typically aim for.

Yeah I usually default to just a straight up link or a markdown link. Mostly because I usually don’t know the exact number of a PR/ticket/issue - so it’s easy to just copy the URL once I’ve found it.


I've seen the squashed message be the concatenation of the individual git commit messages way too often. Just a full screen scroll of "my hands are writing letters" and "afkifrj". Still better than having those commits individually, of course, but dear god.

The gold standard is rebased linear unsquashed history with literary commits, but I'll take merged and squashed PR commits with sensible commit messages without complaint.

