This is great news! MOBI and especially KF8 (Amazon's flavour of XHTML on top of MOBI) are horribly complex by virtue of being additional layers over the PalmDB format, which on its own is actually quite elegant.
I've spent quite some time on building a fully featured MOBI library[1]. A bit sad that my work will become obsolete this quickly, but definitely good in the long run.
I was just looking for a replacement library in Go to build documents for the Kindle. I have a little service that converts RSS feed articles into content for ebook readers, and currently I convert them to MOBI. It's great to see that there's an "easy" way to switch to AZW.
Some CDC implementations I have seen use a desired "average" chunk size in addition to a minimum and maximum. Once the chunk exceeds the desired average size, the test for recognizing a byte sequence as a cut point becomes more forgiving. Other solutions also retry previously processed sequences using the more forgiving threshold.
However, from what I've seen, these methods generally come at the cost of deduplication ratio and/or speed. The most reliable way to avoid pathological cases seems to be simply setting the minimum/maximum chunk sizes to low/high enough values respectively.
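The two-threshold scheme described above can be sketched roughly like this. This is a FastCDC-style toy in Go, not any particular implementation; the gear table, mask widths, and sizes are illustrative assumptions:

```go
package main

import (
	"fmt"
	"math/rand"
)

// gear holds one pseudo-random 64-bit value per byte value,
// seeded deterministically for reproducibility.
var gear [256]uint64

func init() {
	rng := rand.New(rand.NewSource(1))
	for i := range gear {
		gear[i] = rng.Uint64()
	}
}

// cut returns the length of the next chunk in data: a strict mask is
// used before the desired average size, a more forgiving one after it.
func cut(data []byte, min, avg, max int) int {
	const (
		maskStrict  = uint64(1)<<15 - 1 // harder to match: favors longer chunks
		maskRelaxed = uint64(1)<<11 - 1 // easier to match: favors shorter chunks
	)
	if len(data) <= min {
		return len(data) // remainder is smaller than the minimum chunk size
	}
	if len(data) > max {
		data = data[:max] // never look past the maximum chunk size
	}
	var h uint64
	for i := 0; i < len(data); i++ {
		h = (h << 1) + gear[data[i]] // rolling gear hash
		if i < min {
			continue // never cut before the minimum size
		}
		mask := maskStrict
		if i >= avg {
			mask = maskRelaxed // past the average: be more forgiving
		}
		if h&mask == 0 {
			return i + 1
		}
	}
	return len(data) // hit the maximum size without a match
}

func main() {
	data := make([]byte, 1<<20)
	rand.New(rand.NewSource(2)).Read(data)
	min, avg, max := 2048, 8192, 65536
	chunks := 0
	for off := 0; off < len(data); {
		n := cut(data[off:], min, avg, max)
		if n > max || (off+n < len(data) && n < min) {
			panic("chunk size out of bounds")
		}
		chunks++
		off += n
	}
	fmt.Println("all chunks within bounds, total:", chunks)
}
```

The key property is that past `avg` the relaxed mask makes a cut point much more likely, pulling the chunk-size distribution toward the average while keeping the hard min/max bounds.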
If you're talking in a purely theoretical sense, I would assume that the possibility of changes affecting non-local chunks is inherent to CDC. With well-chosen parameters the likelihood of any but the closest chunks being affected just becomes low enough to be negligible.
The problem is that pathological cases are things like a repeating pattern (or a single repeating byte). Another issue is deliberate attacks: if a Dolt user can craft datasets for which single-row changes translate into a duplication of the entire tree (and dataset), this becomes an obvious DoS vector for hosted Dolt platforms.
From what I've seen, the likelihood of triggering a pathological case with real-world, non-malicious data is actually low enough to be ignored, given that the rolling hash function is well-crafted. I do agree that crafting malicious data to break deduplication in Dolt should be relatively easy, but I do not see how this could lead to a DoS on e.g. a hosted Dolt platform. If I understand correctly, your proposed attack would only affect the rate of deduplication and by extension the disk space used, and I would expect a hosted Dolt platform to have strict disk-space limits or use storage-based billing.
I don't know about BuildKit and `docker buildx`, but I've been using an approach similar to what is outlined in paragraph two to generate static executables for one of my projects.
I use a multistage build to generate a podman/docker image that contains all my build artifacts and then just copy them out of the container onto my host system. What advantages would there be in using BuildKit for this sort of thing?
I remember introducing a middle schooler to programming using Scratch at my high school's open day. I had spent a few hours making a simple two-player shooting game and a maze generator for the IT class display. Most kids just wanted to play the games, but seeing even one of them be genuinely excited about the possibility of creating his own interactive experiences, asking how certain mechanics were implemented, what I did to get to this point, and whether I thought he would be able to create similar things, was really encouraging. It probably didn't make a difference in the long run, but it's still a treasured memory for me.
I'm personally curious how Next.js is able to achieve the claim of "zero client-side JavaScript" mentioned here[1] using React Server Components. It just doesn't seem to make sense to me, and the HN clone example as well as my barebones test project still clearly load about 74.2 KB of JavaScript. Is the claim supposed to mean that server components won't require additional JS, or maybe that they won't need to execute any client-side JS to be fully rendered?
I think it means that any page that only renders server components can just push the built HTML to the browser. If there were client components, like an upvote button, React would be required on the client to hydrate the DOM.
It’s very similar to what Astro is doing. Only adding the JS if it’s actually necessary.
From my understanding of the keynote earlier today, the reason that demo ships JS is that the repo opted in to client-side code to handle the upvote functionality. If it didn't require that interactivity, it could be shipped with no client-side JS. The client-side JS is defined in the component files ending in .client.js.
Thank you for your response! However, the linked content further supports my suspicion that websites without any client-side JS are not in the scope of RSCs. The linked RFC, in my opinion, clearly states that a client-side framework and React are expected to accept and handle the streamed React response[1]. So while it may be possible to eliminate a lot of client-side JS, at least those parts would always have to be available on the client, correct?
I would also like to make clear that I'm not here to bash the Next.js project; I am simply interested in the technology.
This is a really cool project. It's unfortunate to see the author so obviously frustrated by having Sixel-related contributions rejected time and time again. However, it seems the main tmux maintainer has shown up on the issue tracker, saying he is open to including the changes upstream; maybe that will restore the author's faith in the open-source community.
Somehow I don't seem to have the same experience as most people in this thread. Sure, homegrown tomatoes and cucumbers taste quite a bit better than what I usually buy at the supermarket, but the difference certainly isn't perfect versus inedible.
Possible explanations I can think of:
1. My taste perception is just broken because I am relatively young and have been raised on low-quality produce.
2. My local supermarkets just happen to stock excellent vegetables (I live in Central Europe).
3. There are some extremely high-quality breeds of tomatoes/cucumbers that I have never eaten before.
4. Other people are simply more enthusiastic about vegetable quality and therefore their claims seem exaggerated to me.
First is that some produce really is terrible -- "regular" cheap tomatoes can be utterly flavorless. But on the other hand, a lot of supermarkets stock tomatoes that range from fine to quite excellent, e.g. cherry tomatoes grown in greenhouses sold on the vine.
And second is that people really do exaggerate how great homegrown tomatoes are. There are tomato snobs in the same way there are coffee snobs, whiskey snobs, chocolate snobs, whatever. They insist something is 100x better, when really it's just 1.5x better, because for some reason that's important to them, part of their identity.
Yes, a farmer's-market heirloom tomato is utterly delicious. But store-bought cherry tomatoes on the vine are also super super tasty. Even the ones not on the vine can be really really good. (You can also find really bad ones though, it depends on the store.) I'd go so far as to say they're just different, neither obviously better than the other.
I currently trust Restic with basically all of my long-term backups, which, according to the author, really isn't a thing I should do.
However, I'm still somewhat confident in my strategy, as I back up all of my data to two entirely separate repositories, one backed by Google Cloud and the other by the server sitting in my pantry. So one of these repositories could get irrecoverably corrupted and I still wouldn't lose any of my data. With cloud storage becoming so cheap, I've also thought about adding a third repo.
Of course this would not protect me from a hypothetical bug in Restic that corrupts all my repositories before I notice, so maybe I should also add another auto-backup solution into the mix.
Doing manual things like moving data to external storage seems like a robust strategy, but I really don't trust myself to do something like that nearly often enough for it to be useful.
I have used restic for quite a while. Once in a while, I test that I can restore my backups. That's an important step that lots of people miss.
I had a client who asked me to set up their system. I set up the system, they got a tape drive, and I had them rotate tapes daily. There was a cron job to tar everything to the tapes.
It was great until their hard drive failed and I found I had a typo: the job was only backing up the current folder, not the whole drive. Needless to say, they found a new IT provider, and I learned an important lesson: if you haven't tested your backups, you have no backups.
The interface uses ring buffers for communication between the kernel and userspace. That's how I have always understood it, at least; not sure if it's actually correct.
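For intuition, the scheme (as in e.g. io_uring's submission and completion queues) boils down to a fixed-size array plus two free-running indices, one advanced by each side. Here is a toy single-producer/single-consumer sketch in Go; it only illustrates the indexing idea, not the actual kernel ABI, which involves shared memory mappings and atomic operations:

```go
package main

import "fmt"

// ring is a toy SPSC ring buffer: head and tail increase monotonically
// and are masked onto the power-of-two-sized entries array on access.
type ring struct {
	entries    []uint64
	head, tail uint32
}

func newRing(size uint32) *ring {
	// size must be a power of two so that index & (size-1) wraps cleanly.
	return &ring{entries: make([]uint64, size)}
}

// push appends v to the ring; it reports false when the ring is full.
func (r *ring) push(v uint64) bool {
	if r.tail-r.head == uint32(len(r.entries)) {
		return false // full
	}
	r.entries[r.tail&uint32(len(r.entries)-1)] = v
	r.tail++
	return true
}

// pop removes the oldest entry; it reports false when the ring is empty.
func (r *ring) pop() (uint64, bool) {
	if r.head == r.tail {
		return 0, false // empty
	}
	v := r.entries[r.head&uint32(len(r.entries)-1)]
	r.head++
	return v, true
}

func main() {
	r := newRing(4)
	for i := uint64(1); i <= 5; i++ {
		fmt.Println("push", i, r.push(i)) // the fifth push fails: ring is full
	}
	for {
		v, ok := r.pop()
		if !ok {
			break
		}
		fmt.Println("pop", v) // entries come back out in FIFO order
	}
}
```

Because each index is only ever written by one side (producer writes the tail, consumer writes the head), the two parties can communicate without locks, which is what makes this layout attractive for a kernel/userspace boundary.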
Just as another data point: I am from Europe and my application was accepted very quickly. I'm currently using Hetzner for most of my personal cloud stuff and have been very happy with their services thus far.
[1]: https://github.com/leotaku/mobi