nightpool's comments

The article mentions that they use Private Access Tokens on iOS, so I'm not sure where you're getting the idea that they're "not adopting" them from.

Ooh, this looks really good. I use OpenCamera on Android but it's pretty limited... I wonder if there's anything like Lumina there.

Is the AROUND(n) one real? I've never seen it before, and trying "climate AROUND(3) policy" as mentioned in the article just gives me results where "Around 3" is in the body:

European Central Bank Climate, Nature and Monetary Policy 1 day ago — ECB research has found that four years after a drought or flood, regional output remains depressed by around 3 percentage points on average

(compared to e.g. https://www.google.com/search?q=climate+policy+ecb, which has the same result but does not show the "around 3 percentage points" snippet)


Plenty of devs choose to sell on other platforms or directly and do fine. Steam doesn't have a monopoly on games the way Apple and Google do.

The ability to make information private fundamentally conflicts with how ATProto is designed. All records have to be sent to all Relays and AppView nodes on the network to provide a "global view" of the network. So there's no way to keep records private without locking out some users' servers from viewing them, and since AppViews are centralized indexing services, they won't function without being able to see the entire network.
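For concreteness, the "global view" means anyone can subscribe to a relay's firehose and see every record as it's written. A minimal sketch (this uses Bluesky's public relay endpoint; frame decoding is elided since events arrive as DAG-CBOR):

  import asyncio
  import websockets   # third-party: pip install websockets

  async def tap_firehose():
      # Bluesky's public relay; frames are DAG-CBOR, decoding elided here.
      uri = "wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos"
      async with websockets.connect(uri) as ws:
          frame = await ws.recv()   # one commit event from someone's PDS
          print(len(frame), "bytes: a record write, visible to every subscriber")

  asyncio.run(tap_firehose())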

Yeah, apps wouldn't be able to only listen to the firehose.

There are some proposals for private files [1]. However, I'm outside the AtProto world, so I'm not sure what exactly the suggested implementations are. I just hope they give enough control.

I think the technology could potentially be used for way more than microblogging. I would love to use webapps that store the data on my devices and share it with specific people, with the data and access under my control.

[1]: https://dholms.leaflet.pub/3mhj6bcqats2o


What you linked is by Bluesky’s own staff - it’s him laying out the design plans for the protocol’s private data implementation.

He’s doing it to engage publicly with the community, since many of us also build on the protocol.


> Sync is pull-based. Applications are responsible for staying in sync with all member PDSes. PDSes assist by sending lightweight write notifications to prompt pulls when new data is written.

It looks like this basically just reinvents ActivityPub (local servers can pull or push to remote servers). So it defeats all of the "benefits" you get from Bluesky's firehose-based approach anyway, except for the fact that Bluesky assumes you're going to be using their AppView and they will always have access to your private data.
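To make the contrast concrete, here's a minimal sketch of that pull-based flow (class and method names are illustrative, not from the proposal):

  class PDS:
      def __init__(self):
          self.records, self.subscribers = [], []

      def write(self, record):
          self.records.append(record)
          for app in self.subscribers:
              app.on_write_notification(self)   # lightweight: no payload attached

      def pull(self, since):
          return self.records[since:]           # the app pulls what it's missing

  class Application:
      def __init__(self):
          self.synced = {}                      # pds -> records pulled so far

      def on_write_notification(self, pds):
          have = self.synced.setdefault(pds, [])
          have.extend(pds.pull(since=len(have)))   # notification prompts a pull

  pds, app = PDS(), Application()
  pds.subscribers.append(app)
  pds.write({"text": "hello"})
  assert app.synced[pds] == [{"text": "hello"}]

Which is essentially the same notify-then-fetch shape ActivityPub servers already use between each other.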


What would happen if you returned different content depending on who was asking?

It would fail the Merkle tree validation.
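Roughly, because the signed commit pins a hash of the content. A toy illustration, with a plain SHA-256 digest standing in for ATProto's actual MST/CID machinery:

  import hashlib

  def cid(content: bytes) -> str:
      # Stand-in for a real CID: any change to the bytes changes the hash.
      return hashlib.sha256(content).hexdigest()

  signed_root = cid(b"post: hello world")   # what followers already hold

  def fetch(asker: str) -> bytes:
      # A server that lies to one audience...
      return b"post: goodbye" if asker == "alice" else b"post: hello world"

  # ...gets caught: the served bytes no longer match the committed hash.
  assert cid(fetch("bob")) == signed_root
  assert cid(fetch("alice")) != signed_root   # validation fails for alice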

I'm a huge supporter of federation, but I've never understood the use-case for a "federation of forges". What data are the forges exchanging? Why should the forge for Blender have any connection to the forge for Ubuntu?

Most of the value I get from Github is having a single login that I can take from project to project. Independent forges can get the same value simply by supporting social login, without needing the complexity of a "forge federation" system.


If people want to find software, they search GitHub. If you self-host a forge, no one will ever find your software unless you’re a preestablished big name (like Blender). To avoid throwing your code into the void, you’re pretty much forced to mirror with GitHub, at least.

To avoid this and make smaller forges as a block a viable competitor, there needs to be a singular network that solves discoverability and lets you find software from any host – like ForgeFed would.

There’s also the concern about the friction created by requiring newbies to log in to a dedicated forge for contributions (which ForgeFed solves), but I reckon that’s a secondary, related concern.


This is an indexing problem, not a federation problem. Personally, if I want to find software, I use Google, Rubygems, or NPM. Github is a distant third option. But this project is about data interchange between forges. It doesn't solve the indexing / discoverability problem.

Having a better code search crawler that can grab data from independent git repos would be really cool. But being able to submit a PR from server 1 to server 2 is pretty unrelated to that.
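To sketch what that could look like: a toy crawler that shallow-clones a list of independent repos and builds a naive word index (the repo URL and paths are placeholders):

  import collections
  import pathlib
  import subprocess

  repos = ["https://example.org/alice/tool.git"]   # placeholder URLs
  index = collections.defaultdict(set)             # word -> files containing it

  for url in repos:
      dest = pathlib.Path("/tmp/crawl") / pathlib.Path(url).stem
      if not dest.exists():
          subprocess.run(["git", "clone", "--depth=1", url, str(dest)], check=True)
      for f in dest.rglob("*.md"):                 # index READMEs and docs
          for word in f.read_text(errors="ignore").lower().split():
              index[word].add(str(f))

  print(sorted(index.get("parser", []))[:5])       # naive "find software" query

Note that nothing here needs the forges to talk to each other: plain git-over-HTTP is enough.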


> If people want to find software, they search GitHub.

people really do that?


The only time I ever search GitHub is when I'm trying to debug or understand some esoteric API (usually Apple-specific) and I'm looking for anybody else who has actually used the god damned thing.

If I'm looking for software/libs/etc, GitHub search is the absolute last thing I would even think to look for.


Git is decentralized by design. It can support federation; it just happens that GitHub solved the UI for issues and PRs so that even a newcomer can come in, do git stuff, and track issues on the screen. But it centralized everything.

Federation would be closer to git, without being so decentralized that when one node goes offline you have no upstream to pull from, or no way to find one.

Git doesn't solve availability. Federation may solve it, by staying closer to the decentralized philosophy. That's my read.


Not sure I understand: you're talking about mirroring git repo data between multiple different nodes? That seems unrelated to what's proposed in the OP--maybe you're seeing something I'm not?

If I fork a repository to my forge, I expect my forge to have an independent copy of the repo.

How does that fix "when one node goes offline you may not have any upstream to pull from"? You'd still have your own local copy—just like git—but you wouldn't be able to access any sense of "upstream"

By hosting a knot.

You may ask, well, that's like hosting forgejo or any other git server, where is the federation?

Tangled uses a protocol, so knots would adhere to that protocol, allowing pulls from any upstream.

That's my understanding of federation. Not saying tangled will go as far as figuring out discovery across their cloud-hosted knots and self-hosted infra. But that can be done, and claiming to be able to pull from any repo with a single identity would imply just that.


Git already adheres to a protocol that allows you to pull from any upstream (that exists). Tangled does not change that.

The biggest problem IMO is discoverability. I need an easy way to find open source projects that are on scattered servers. GitHub project search is limited to GitHub.

The OP says that tangled only supports event federation. How does it help with discoverability?

Events in atproto speak are changes to metadata/records, i.e. repo/MST events on a PDS.

So for tangled that means federation of issues, PRs, comments, follows, stars, and anything defined in an atproto lexicon; i.e., everything except the actual git repo itself. Those repos are singularly hosted on a given knot for the time being.

Now it's not a huge leap to imagine extending functionality to support cross-knot mirrors, but that's not a supported feature yet. And of course you can always just fork a repo instead.
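To give a flavour of "defined in an atproto lexicon", an issue record might look roughly like this (the NSID and field names are invented for illustration, not Tangled's real schema):

  # Hypothetical shape of a federated issue record stored on a user's PDS.
  # The $type NSID and field names are invented for illustration; Tangled's
  # real lexicons define the actual schema.
  issue_record = {
      "$type": "sh.example.repo.issue",
      "repo": "at://did:plc:alice123/sh.example.repo/my-project",
      "title": "Build fails on ARM",
      "body": "Steps to reproduce: ...",
      "createdAt": "2025-01-01T00:00:00Z",
  }
  # Note what is absent: no git blobs or commits. Those stay on the knot.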


Github is already in practice federated, within the confines of GitHub. If you fork a project you now have your own federated git forge with that project.

The difference is that these same flows should work without needing to be github to github.


Interoperable identity providers would indeed be useful.

Beyond that, maybe resilience when a project's host disappears, changes its policies, or gets blocked by a government?


How does tangled solve that? Repository contents are still hosted by the forges themselves.

I was addressing the question of a use-case for a "federation of forges". Not any specific design or implementation.

That sounds more like you want better decentralization, like IPFS or BitTorrent, not necessarily federation between different forge instances. I'm not familiar with any existing federated system that would be resilient to government censorship. Certainly Mastodon and Bluesky aren't.

> I'm not familiar with any existing federated system that would be resilient to government censorship.

Usenet and Matrix are notable examples.


Usenet is, Matrix isn't. Usenet achieves this with a broadcast design - every node on the network receives every message. As a result of this and being flooded with half a petabyte of new messages per day, there are approximately 3 (three) nodes (all other providers are reselling access to one of these).

The text side of Usenet is healthier, with a few gigabytes per day, and not trying to retain every message forever. Would it work if it was also the world's git forge though?


> As a result of this and being flooded with half a petabyte of new messages per day, there are approximately 3 (three) nodes (all other providers are reselling access to one of these).

You seem to be referring to a particular set of binary-focused servers. I am referring to the protocol and network design, as an example of a federated system offering resilience.

(Also, I think your numbers are wrong, but I won't quibble about those because it's the network that's relevant to this thread, not the way some people happen to be using it today.)

> Matrix isn't.

It is. Blocking or shutting down any node in the network only affects that node. Others carry on without it. Another example of a federated system offering resilience.


In this case the benefit would be:

- Your data lives in one place, your Personal Data Server (PDS). You can self-host this if you like.

- The AppView (in this case, tangled.org) aggregates the data from many PDSes into one view.

- If tangled.org enshittifies, you can do all the same things from any other AppView -- tangled.org itself is not privileged in any way.

Social logins on independent forges help, but personally I'd rather have a single account to manage -- and the AT protocol means that any individual forge can go down, but the data remains accessible from other AppViews.
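Concretely, the "one place" is discoverable from the account identity itself; here's a sketch resolving a did:plc account to its PDS endpoint via the public PLC directory (the DID below is a placeholder):

  import json
  import urllib.request

  did = "did:plc:example123"   # placeholder; substitute a real did:plc identifier

  # The public PLC directory serves the DID document, which names the PDS host.
  with urllib.request.urlopen(f"https://plc.directory/{did}") as resp:
      doc = json.load(resp)

  pds = next(s["serviceEndpoint"] for s in doc["service"]
             if s["id"] == "#atproto_pds")
  print(pds)   # the single host where this account's data actually lives

Any AppView can do this lookup, which is why no single one of them is privileged.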


In this case the PDS is only storing social data though, right? The forge would still store the repository data itself.

Aha, I was mistaken -- I was under the impression that repos were also stored on the PDS.

Looks like that's where knots come in -- you could replace "PDS" with "PDS and knot" in my earlier comment and it holds true, I believe.


Every ABET-accredited CS program (almost every CS program in the US, I think?) requires an Ethics in Computer Science credit. I remember going over a lot of case studies, including Therac-25, but our course also included a lot of general grounding in ethics and philosophy as well, which I enjoyed a lot.

ah, fair enough! maybe it is/was a uk thing (admittedly times might have changed a little since i did my masters/phd).

at the very least i have a wikipedia article on therac 25 to read through now. so thanks for that!

also, yea i remember really enjoying the ethics module too. lots of discussion and not always a clear answer. was very different to the rest of the "one correct maths answer" in a lot of the other modules.


Site is struggling a bit, so here's the text of the essay if it doesn't load for you:

  To my students
  April 27, 2026
  Brent A. Yorgey
  There have been times, especially this year, when I wonder despairingly what it is exactly that I am preparing you for. The software industry is going completely insane, not to mention the political climate. It feels almost unethical to train you as computer scientists only to send you out into a world where entry-level computing jobs are difficult to find; where intellectual property is not respected; where code quantity is valued over quality, and short-term profits over long-term sustainability; where technology is used to distract, extract, surveil, and kill, and designed to exploit some of our deepest cognitive biases and blind spots; where centuries of bias and discrimination are enshrined in systems trained on biased data; where scarce resources are consumed by profligate use of computing for uncertain benefits; where people are racing to create intelligent machines, but only in order to make them slaves.

  I originally got into computing because of the beauty of ideas, the joy of creating, and the possibility of building tools to help people and foster human relationships. I still believe in those things, even though it seems like most of the industry does not. I'm writing this in the hope and knowledge that you believe in those things, too. There are things I want to say to you—things that are far more important than any content I might teach you, but things I'm never quite sure how or when to say in class. So I decided to write them here. I hope you will find something here that is helpful to reflect on, whether you are imminently going out into the world or continuing your studies.


  * Don't believe self-serving lies about technologies being "inevitable" or "here to stay". You don't have to just go along with the dominant narrative. You can make deliberate choices and help others to do the same.
  * Be intentional about deciding your own moral and ethical boundaries up front. Don't settle for the lie of compromising your principles "just for now" until you can find something better.
  * Cultivate your ability to think deeply. Do whatever it takes to carve out distraction-free bubbles for yourself in both space and time. This might mean saying no to technologies or patterns of working that others say are critical or inevitable.
  * Care deeply about your craft. Refactor code until it is clear and elegant. Write good documentation for other humans to read. Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.
  * Care more about people, relationships, and justice than you do about profits, code, or productivity.
  * Above all, be motivated by love instead of fear.

"Law enforcement shrugs"? The whole focus of the article is about how the secret service confiscated those devices and charged the SIM farm operators with crimes. Which part of that is shrugging?

The article is about Canada.

Yes, it should be cheap to throw out any individual PR and rewrite it from scratch. Your first draft of a solution is almost never the one you want to submit anyway. The actual writing of the code should never be the most complicated step in any individual PR; that should always be the time spent thinking about the problem and the solution space. Sometimes you can do a lot of that work before the ticket, if you're very familiar with the codebase and the problem space, but for most novel problems, you're going to need to have your hands on the problem itself to get your most productive understanding of it.

I'm not saying it's not important to discuss how you intend to approach the solution ahead of time, but I am saying a lot about any non-trivial problem you're solving can only be discovered by attempting to solve it. Put another way: the best code I write is always my second draft at any given ticket.

More micromanaging of your team's tickets and plans is not going to save you from team members who "show little interest in learning". The fact that your team is "YOLOing a bad PR" is the fundamental culture issue, and that's not one you can solve by adding more process.


I don't disagree that a practical spike is a good way to grasp a novel problem (or work around a lack of internal knowledge because it's legacy code), but there is still something to be said for attempting to work things out in the abstract too, and not necessarily by adding process, but by redeveloping that internal knowledge and getting familiar with the business domain.

In a greenfield project I will have a lot of patience for a team that doesn't grasp the problem space too well yet, and needs to feel around it by experimenting and prototyping. You have to encourage that or you might not even be building anything innovative.

For the longer term legacy project then the team can't really afford to have people going down rabbit holes and it's more beneficial to approach things in the abstract and reduce the problem as much as possible. Especially with junior or mid-level engineers who can see an old codebase as a goldmine for refactoring if left unattended.

As for the fundamental culture issue... maybe. AI increases the frequency of low quality PRs and puts a bigger burden on the reviewer. I can live with this in the short term if people take lessons from it and keep building up their own skillset. I feel this issue is not unique to my team and LLM-driven development is still novel enough that we're all figuring out the best way to tackle it.


I'm not sure what approach you're suggesting?

Asking a more junior developer, or someone who "shows little interest in learning", to discuss their approach with you before they've spent too much time on the problem, especially if you expect them to take the wrong approach, seems like the right way to do things.

Throwing out a PR of someone who doesn't expect it would be quite unpleasant, especially coming from someone more senior.


This is how I try to approach it. I don't think it's a new thing for a new hire to come in hot and try to figure things out themselves rather than spending time with the team. Or getting lost down rabbit holes.
