Tried to use voice cloning, but in order to download the model weights I have to create a HuggingFace account, connect it on the command line, give them my contact information, and agree to their conditions. The open-source part is just the client and chunking logic, which is pretty minimal.
One key thing to understand about TigerBeetle is that it's a file-system-backed database. Static allocation means they limit the number of resources in memory at once (number of connections, number of records that can be returned from a single query, etc.). One of the points is that these things are limited in practice anyway (MySQL and Postgres have a simultaneous connection limit, applications should implement pagination). Thinking about and specifying these limits up front is better than having operations time out or OOM. On the other hand, TigerBeetle does not impose any limit on the amount of data that can be stored in the database.
It's always bad to use O(N) memory if you don't have to. With an FS-backed database, you don't have to. (Whether you're using static allocation or not. I work on a Ruby web app, and we avoid loading N records into memory at once, using fixed-size batches instead.) Doing allocation up front is just a very nice way of ensuring you've thought about those limits, making sure you don't slip up, and avoiding the runtime cost of allocations.
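The fixed-size-batch idea can be sketched in plain Ruby. This is a hypothetical illustration, not code from any real app: `fetch_batch` stands in for a keyset-paginated query (`WHERE id > :last_id ORDER BY id LIMIT :batch_size`), so only one batch is ever resident at a time.

```ruby
# Hypothetical sketch: process rows in fixed-size batches instead of
# loading all N records into memory at once.
BATCH_SIZE = 100

# Stands in for a keyset-paginated DB query.
def fetch_batch(rows, after_id, limit)
  rows.select { |r| r[:id] > after_id }.first(limit)
end

def each_in_batches(rows)
  last_id = 0
  loop do
    batch = fetch_batch(rows, last_id, BATCH_SIZE)
    break if batch.empty?
    batch.each { |row| yield row }
    last_id = batch.last[:id]
  end
end

rows = (1..250).map { |i| { id: i } }
count = 0
each_in_batches(rows) { |_| count += 1 }
puts count # all 250 rows processed, at most BATCH_SIZE in a batch
```

In Rails specifically this is what `find_each`/`in_batches` do for you; the point is the same either way: peak memory is bounded by the batch size, not by N.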
This is totally different from OP's situation, where they're implementing an in-memory database. This means that 1) they've had to impose a limit on the number of kv-pairs they store, and 2) they're paying the cost for all kv-pairs at startup. This is only acceptable if you know you have a fixed upper bound on the number of kv-pairs to store.
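That trade-off can be made concrete with a toy sketch (hypothetical names, not the OP's code): the store's capacity is declared at startup, and inserts beyond it are rejected rather than growing memory at runtime.

```ruby
# Hypothetical sketch of static allocation for an in-memory KV store:
# memory use is bounded up front instead of growing with the workload.
class FixedKV
  def initialize(capacity)
    @capacity = capacity
    @store = {} # in a systems language this would be a preallocated slab
  end

  # Returns false instead of allocating when at capacity.
  def put(key, value)
    return false if !@store.key?(key) && @store.size >= @capacity
    @store[key] = value
    true
  end

  def get(key)
    @store[key]
  end
end

kv = FixedKV.new(2)
kv.put(:a, 1) # => true
kv.put(:b, 2) # => true
kv.put(:c, 3) # => false, at capacity
```

Rejecting the insert is one policy; evicting an old entry is another. Either way, the upper bound has to be chosen before the workload is known, which is exactly the constraint described above.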
As a tiny nit, TigerBeetle isn't a _file system_-backed database; we intentionally limit ourselves to a single "file", and can work with a raw block device or partition, without file system involvement.
>we intentionally limit ourselves to a single "file", and can work with a raw block device or partition, without file system involvement
Those features all go together as one thing, and it's the Unix way of accessing block devices (which, from the client software's perspective, are interchangeable with streams).
You’re defending a weaker system than the actual system.
The system you’re defending is a list of flagged plate numbers and a way of comparing seen plates against that list, and a way of reporting matches to the local police.
The actual system logs all cars seen, saves the information forever, and reports the data to a third party who can share it with anyone they want.
I also found the Zulip UX to be really confusing at first. The issue is that messages show up in multiple places, which is unintuitive for someone with a spatial brain like me. What I do (because I use Zulip every day) is read messages only in their threads. I click on one thread in the sidebar, get caught up, then move to the next thread. (This is also how I use Discord and Slack.) So I treat it as if channels contain threads which contain messages.
But Zulip’s default view is a list of all messages in all threads in all channels, which provides no context for the individual messages.
Zulip's product lead here. Yep, reading messages thread by thread is the recommended way for most folks. (There's even a keyboard shortcut for going to the next one.) The inbox view, which lists the threads where you have unread messages, is the default home view (unless your org admins changed that setting).
The combined feed is helpful for some (e.g., in lower-traffic organizations, or if you like to see messages as they come in), and was the default home view many years ago.
I seem to remember seeing this a week or two ago, and it was very obviously AI generated. (For those unfamiliar with Zig, AI is awful at generating Zig code: small sample dataset and the language updates faster than the models.) Reading it today I had a hard time spotting issues. So I think the author put a fair amount of work into cleaning up hallucinations and fixing inaccuracies.
> We really do want to make all design, including professional design, as widely accessible as possible
In the lead-up to this launch, for the last month, Serif products were unavailable for purchase, leaving me unable to open the document that I created while on a free trial. It would be dumb of me to create more documents in the proprietary Affinity format, because there's nothing stopping you from deciding to do some other marketing stunt that involves removing my access to open my documents in the future.
I'm advocating for open source not as "moving the goal post" but as the ONLY thing that guarantees that I have the right and ability to continue running the software on my own device.
The app that you’re supposed to use for persistent, unnamed, always open documents is obviously Stickies. Try it out by using cmd+shift+Y in any application to add selected text to a new sticky.
(I’m kidding, I’ve never intentionally used this macOS feature.)
I actually used this a lot back in Snow Leopard, but I think it’s gone now. It was actually really nice to have little persistent post-it notes floating around.
Does it still exist? I haven’t seen it or heard of it in years.
This is only a win for Ruby Central. They haven't conceded anything and they've convinced Ruby Core to endorse them as the correct and true maintainers of RubyGems.
> While repository ownership has moved, Ruby Central will continue to share management and governance responsibilities for RubyGems and Bundler in close collaboration with the Ruby core team.
Andre has previously maintained that he owns a trademark on Bundler and he will enforce it against Ruby Central.
So Ruby Central transfers "ownership" of Bundler to Ruby Core. Ruby Central gets to continue to maintain Bundler, and Ruby Core is stuck with the liability. If Andre wants to enforce his trademark, he now has to sue Japan-based Ruby Core and risk the bad optics of that.
Three years ago I was very skeptical of Ladybird. But two things have changed. First, they have funding for 8 full time engineers, which I definitely wasn’t expecting. Second, it’s been three years. So given that, I am more optimistic.
There’s still a very long way to go before they can compete with Chrome, of course. And I’m not sure I ever understood the value proposition compared to forking an existing engine.
The value proposition is not having vendor lock-in and not having WebKit/Blink be the de facto standard behaviour. For example, the Ladybird team has found and raised issues in the various specs.
Another example is around ad blockers -- if Blink is the only option, they can make it hard for ad blockers to function whereas having other engines allows different choices to be made.
>The value proposition is not having vendor lockin
There is by definition no vendor lock-in when you fork an open-source engine. The worst case is the original maintainers going evil tomorrow and you being on your own, which is no worse than starting from scratch, except you've saved yourself some ten-million-odd lines of mindless spec implementation in the case of a browser.
I’m not an expert in this field, but I don’t think I agree. The problem with a browser monopoly is that the monopolist does not have to obey specs — you can just do whatever you want, and force the specs to follow you.
If you fork that monopolist’s engine, you’re not making any immediate difference to the market. You’ll adopt all their existing behavior, whether or not it conforms to spec (and I would guess you would continue to pull in many of their changes down the road).
A brand new implementation is much more difficult, but if it works it’s much more meaningful in preventing a monopoly.
The issue is around maintenance/development burden. For example, when Manifest V2 was dropped in favour of Manifest V3, it was possible for a downstream project (Edge, etc.) to maintain V2 support. However, that gets harder the further along the projects go and the more the code diverges; that may mean keeping more code around (if interfaces or utility classes are changed or removed), or rewriting the support if the logic changes (such as in the network stack).
It's like projects trying to keep Firefox XUL alive, or GTK+ 2 or 3.
The project has now moved from just updating the external dependency to working on that and possibly actively fighting against the tide. That is a lot harder and requires more work each time you update the dependency.
So in effect you have vendor lock-in. And if the vendor controls or affects downstream products like plugin developers (targeting Manifest V3) or application developers (targeting GTK+ 3 or 4), then it's even harder to maintain support for the other functionality.
That’s certainly an advantage, but I’m not sure that’s the value proposition.
It’s that Chrome’s and V8’s implementations have grown to match their resourcing. You probably can’t maintain a fork of their engine long-term without Google-level funding.