Their GUI system (GPUI) is not very mature for use outside of Zed. GPUI is basically a UI framework in the truest sense: a framework for building UI... frameworks/components. It has core functionality for async execution, an ECS for grabbing shared resources, and a div.
It's basically like building a website with div and basic CSS.
Up until sometime in late 2025, GPUI wasn't even on crates.io, and it seems like the GPUI-component ecosystem still promotes using git deps. It was also in a "read the code for docs" state for a very long time.
It's been a while since I've used it, but there were weird things missing too, like the Scrollbar living in Zed's UI component crates instead of core GPUI. Arbitrary text selection is also not possible, which is something I really value about egui.
Iroh's protocol can figure out if the devices are on the same LAN and avoid going over the internet. It can work without a discovery server too -- i.e. entirely on the LAN.
>At the same time, the larger tech companies (Meta and Google, specifically) ended up building off of hg and not git because (at the time, especially) git cannot scale up to their use cases.
Fun story: I don't really know what Microsoft's server-side infra looked like when they migrated the OS repo to git (which, contrary to the name, contains more than just stuff related to the Windows OS), but after a few years they started to hit some object scaling limitations where the easiest solution was to just freeze the "os" repo and roll everyone over to "os2".
They wrote something that allowed them to virtualize Git -- can't remember the name of that. But it basically hydrated files on-demand when accessed in the filesystem.
The problem, I think, had something to do with the number of git objects it was scaling to causing crazy server load. I don't remember the technical details, but it definitely involved the scale of git objects.
Probably a lot of Googlers don't know. It's ancient history, was called google3 even in 2006 when I first joined.
google1 = code written by Larry, Sergey and employee number 1 (Craig). A hacky pile of Python scripts, dumped fairly quickly.
google2 = the first properly engineered C++ codebase. Protobufs etc were in google2. But the build system was some jungle of custom Makefiles, or something like that. I never saw it directly.
google3 = the same code as google2 but with a new custom build system that used Python scripts to generate Makefiles. I suppose it required a new repository so they could port everything over in parallel with code being worked on in google2. P4 was apparently not that great at branches, and google3 didn't use them. Later, the same syntax for the build files was kept but turned into a new language called Starlark, and the Makefile generator went away in favor of Blaze, which directly interpreted them.
Part of the idea behind stacked PRs is to keep your commits focused, with isolated changes that are meaningful on their own.
A stacked PR allows you to construct a sequence of PRs in a way that allows you to iterate on and merge the isolated commits, but blocks merging items higher in the stack until the foundational changes are merged.
Stacked PRs tend to encourage a series of well-organized commits, because you review each commit separately, rather than together.
What they do that the single branch cannot is things like "have a disjoint set of reviewers where some people only review some commits", and that property is exactly why they encourage more well-organized commits: you are reviewing them individually rather than as a massive whole.
They also encourage amending existing commits rather than throwing fixup commits onto the end of a branch, which makes the original commit better rather than splitting it into multiple that aren't semantically useful on their own.
I think the point the GP was trying to make is that the GitHub UI ought to be able to allow you to submit a branch with multiple well-organized commits and review each commit separately with its own PR. The curation of the commits that you'd do for stacked PRs could just as easily be done with commits on a single branch; some of us don't just toss random WIP and fixup commits on a branch and leave it to GitHub to squash at the end. I.e., it's the GitHub UI rather than Git that has been lacking.
(FWIW, I'm dealing with this sort of thing at work right now - working on a complex branch, rewriting history to keep it as a sequence of clean, testable, and reviewable commits, with a plan to split them out into individual PRs when I finish.)
> I think the point the GP was trying to make is that the GitHub UI ought to be able to allow you to submit a branch with multiple well-organized commits and review each commit separately with its own PR.
That's what this feature is, conceptually. In practice, it does seem slightly more cumbersome due to the fact that they're building it on top of the existing, branch-based PR system, but if you want to keep it to one commit, you can (and that's how I've been working with PRs for a while now regardless, honestly).
They confirmed in other comments here that you don't have to use the CLI, just like you don't have to use gh in general to make pull requests, it's just that they think the experience is nicer with it. This is largely a forge-side UI change.
> I think the point the GP was trying to make is that the GitHub UI ought to be able to allow you to submit a branch with multiple well-organized commits and review each commit separately with its own PR
So the point he's trying to make is that the GitHub UI should support stacked PRs, but call them something else because he doesn't like the name?
I think Pijul has some good ideas, but I’m afraid the network effect of git at this point is too strong.
I think jj’s concept of being a front end for many backends and sharing a common UX over them is a good one, but without a pijul backend for existing tools I have a hard time seeing it catch on.
It's not something to over-index on, since it's not a strong protection measure on its own. It simply raises the overall cost to attack and analyze a system.
Take the PS5 for example. It has execute-only memory. Even if you find a bug, how do you exploit it if you can't read the executable text of your ROP/JOP target?
Hi, quick note on "For modern Xbox platforms, public 2024 work exposed SystemOS kernel exploitation on both Xbox One and Xbox Series"
I'm a former Xbox hacker, then a former Microsoft employee, and (long after leaving Microsoft) helped with the Collateral Damage post-exploitation payload.
The design of the Xbox One security predates me, but Microsoft has always known that SystemOS would be a weak link that was almost guaranteed to be compromised, so they shoved most of the attack surface that can be trivially attacked in there. The system shell, 3rd-party apps, guide, etc. all run in SystemOS.
The key things they focused on though were:
1. Extremely strong defense-in-depth
2. Making full or partial exploitation not economical
3rd party apps and the web browser were seen as being obviously untrusted _and_ needed JIT because they'd mostly be based on .NET or the JS VM. But practically speaking there should be nothing interesting in that VM: its compromise shouldn't enable piracy/cheating and ideally shouldn't leak game plaintext.
What some others found, though, was that for some reason plaintext was actually visible to SystemOS, but that didn't enable piracy on console. You can take those games and run them on PC using XWine1: https://github.com/xwine1
Technically speaking, there's no reason why Collateral Damage couldn't have happened waayyyyy earlier in the Xbox One's lifecycle except for motivation. Even still, you could probably take some Hyper-V N-day and compromise HostOS through it.
Over the years there have been other "exploits" too: some folks have managed to tamper with game saves via cloud-connected storage and other shenanigans, XSS in the system shell (some of these apps are JS), etc., but most of this was relatively benign and easily patchable. And there has been a very, very small group of people with similar but less capable exploits to Collat.
That explains things. I'm getting this:
API Error: 400 {"error":{"message":"Budget has been exceeded! Current cost: 271.29866200000015, Max budget: 200.0","type":"budget_exceeded","param":null,"code":"400"}}
So I completely ran out of tokens and haven't even used it at all for the past couple of days, and last week my usage was very light. Let me scratch that: all my usage has been very light since I got this plan at work. It's an enterprise subscription, I believe; hard to tell since it doesn't connect directly to Anthropic, rather it goes through a proxy on Azure.
I'm not liking this at all, so flaky and opaque. It's not possible to get a breakdown of what the usage went on, right? Do we have to contact Anthropic for a refund, or will they restore the bogus usage?
I completely agree that requests are what should be charged for. But I think there are two options, given that requests aren't all going to cost the same amount:
1. Estimate-free: just invoice the requests and let users figure out the cost after the fact.
2. Somehow estimating cost up front and telling users how much a request will cost (rough sketch below).
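For option 2, a minimal sketch of what an up-front estimate could look like, with entirely made-up prices and a crude token heuristic (none of this reflects Anthropic's or Azure's actual rates or tokenizer):

    # Hypothetical per-token prices -- assumed for illustration only.
    PRICE_PER_1K_INPUT = 0.003    # $ per 1k input tokens (assumed)
    PRICE_PER_1K_OUTPUT = 0.015   # $ per 1k output tokens (assumed)

    def estimate_request_cost(prompt: str, max_output_tokens: int) -> float:
        """Very rough pre-flight estimate shown to the user before sending."""
        input_tokens = len(prompt) / 4  # crude chars-per-token heuristic
        return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
               (max_output_tokens / 1000) * PRICE_PER_1K_OUTPUT

    print(f"Estimated cost: ~${estimate_request_cost('fix this bug...' * 200, 2000):.4f}")

Even something this rough, surfaced before the request fires, would go a long way toward avoiding surprise budget_exceeded errors.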
I'm fairly certain the knob on the machine that controls the length of redundant comments and docblocks is cranked to 11. It makes me curious how much of their bottom line is driven by redundant comment output.
I maintain some tools for the videogame World of Warships. The developer has a file called GameParams.bin which is Python-pickled data (their scripting language is Python).
Working with this is pretty painful, so I convert the Pickled structure to other formats including JSON.
The prettified file has always been around ~500MB, but as of recently it expands to about 3GB, I think because they've added extra regional parameters.
The file inflates to a large size because Pickle refcounts objects for deduping, whereas obviously that’s lost in JSON.
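As a toy illustration of why the conversion balloons (the ship names and fields here are made up, not the real GameParams schema): a shared object is written once to the pickle stream via its memo table, but gets copied at every reference when dumped to JSON.

    import json
    import pickle

    # One shared parameter block referenced by many entries (hypothetical data).
    shared = {"damage": 1000, "reload_s": 8.5, "range_km": 15.2}
    params = {f"Ship_{i}": shared for i in range(10_000)}

    pickled = pickle.dumps(params)   # 'shared' is written once, then referenced
    as_json = json.dumps(params)     # 'shared' is duplicated 10,000 times

    print(len(pickled), len(as_json))  # the JSON output is several times larger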
I care about speed and about tools not choking on the large inputs, so I use jaq for querying and instruct LLMs operating on the data to do the same.
gpui-component exists: https://github.com/longbridge/gpui-component