I have used Perkeep. I still do, at least in theory. I love the concept, but it's become… not quite abandonware, but it never gained enough traction to take on a full life of its own before the primary author moved on. A bit of a tragedy, because the basic idea is pretty compelling.
I evaluated it for a home server a few years ago and, yeah: compelling in concept, but a system like this lives or dies by the quality of its integrations with other systems, the ability to automatically ingest photos and notes from your phone, documents from your computer, or your tax returns from Dropbox.
A permanent private data store needs straightforward ways to get that data into it, and then to search and consume it once it's there.
I've been similarly half-interested in it for... more than a decade now. The new release (which is what I assume prompted this post) looks pretty impressive (https://github.com/perkeep/perkeep/releases/tag/v0.12).
Why would this need to work with Tailscale? It just needs to be running on a machine in your tailnet to be accessible, what other integration is necessary?
I'm a co-author of tsidp, btw. You don't need tsidp with a Tailscale-native app: you already know the identity of the peer. tsidp is useful for bridging from Tailscale auth to something that's unaware of Tailscale.
I use `tsnet` and `tsidp` heavily to safely expose a bunch of services to my client devices, they've been instrumental for my little self-hosted cloud of services. Thanks for building `tsidp` (and Perkeep!) :).
I'm in the same boat. It's well designed, works great, and I really can't get it out of my head as a well-engineered project and a great idea.
But it really is nearly abandoned, and outside of the happy path the primary author uses it for, it's desolate. There is no community around growing its usage, and pull requests have sat around for months before the maintainer replies. Which is fine if that's what the author wants (he's quite busy!), but disappointing to potential adopters. I've looked at using it with data types that sit outside the author's use case, and you'd really need to fork it and change code all over the repo to use it effectively. It never hit the "store everything" ideal it promises, given its hard-coded data types for indexing and system support.
(and yes, I did look at forking it and creating my own indexer, but some things just aren't meant to be)
That's not really a surprise: the website and documentation are awful and don't sell the project well. I also get the impression there isn't much customization possible, no integration with external tools, just a monolithic blob doing something. This kind of software can't easily succeed without an open architecture, or at least documentation that shows how to adapt it to your own needs.
They're supposed to be released today for everyone, and o3-pro for Pro users in a few weeks:
"ChatGPT Plus, Pro, and Team users will see o3, o4-mini, and o4-mini-high in the model selector starting today, replacing o1, o3‑mini, and o3‑mini‑high."
They are all now available on the Pro plan. Y'all really ought to have a little more grace and wait 30 minutes after the announcement for the rollout.
They'd probably want their announcement to be the one the press picks up instead of a tweet or reddit post saying "Did anyone else notice the new ChatGPT model?"
Anyone doubting this really has no idea what they're talking about.
Set up a "Health & Fitness" project in Claude (or whatever). Feed it:
* Basic data: height, weight, age, sex
* Basic metric snapshots from Apple Health or whatever: HRV range, RHR, typical sleep structure - go through everything and summarize it
* Typical diet (do you track it in MFP or Cronometer? Great, upload a nutrition report)
* Any supplements and medications you take
* Typical exercise habits
* Any health records you have - bloodwork results, interpreted imaging results, etc.
* Family history like you would describe it to a doctor
* Summary of any health complaints
* Anything else that seems relevant.
Then go through a few conversation loops asking it if there's any more information you could provide that would help it be more useful.
Then ask it things like "Given <health complaint>, what should I be doing more of? Less of?"—or "Please speculate about potential causes of <thing>".
Or, even if you don't have any particular health complaints you're working with, just being able to ask it questions like "What's one supplement I should consider starting or stopping today?" (and then obviously do some follow-up research...)
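If you track this data in structured form, you can also assemble the context document programmatically before pasting it into the project. A minimal sketch; the function name, section headings, and all field names/values are made-up examples, not a real Apple Health or Claude API:

```python
# Hypothetical sketch: render basic data, metric snapshots, and free-form
# notes into one document you could paste into a "Health & Fitness" project.
# All keys and values below are illustrative.

def build_health_context(profile: dict, metrics: dict, notes: list[str]) -> str:
    """Render profile, metrics, and notes as a single plain-text document."""
    lines = ["# Health Context", "", "## Basic data"]
    lines += [f"- {key}: {value}" for key, value in profile.items()]
    lines += ["", "## Metric snapshots"]
    lines += [f"- {key}: {value}" for key, value in metrics.items()]
    lines += ["", "## Notes"]
    lines += [f"- {note}" for note in notes]
    return "\n".join(lines)

doc = build_health_context(
    profile={"age": 40, "sex": "M", "height_cm": 180, "weight_kg": 78},
    metrics={"resting_hr": 58, "hrv_ms_range": "45-70", "avg_sleep_h": 7.2},
    notes=["Supplements: vitamin D 2000 IU daily", "Complaint: afternoon fatigue"],
)
print(doc)
```

Keeping the source data structured like this makes it easy to regenerate and re-upload the document as your metrics change.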
This is life-changing. Anyone skeptical of this has not tried it.
I've been troubleshooting some chronic inflammation issues (plausibly, like OP, an autoimmune issue).
It's suggested a few supplements that have helped a lot, helped me figure out dosing and timing, pointed me toward taking gut inflammation more seriously as part of what's going on (and suggested various tests and experiments to help prove or disprove that), explained correlations in various bloodwork results; the list goes on.
It's—of course—not perfectly trustworthy but a lot of things are either trivially verifiable or are low-risk experiments.
DC has a staggering density of restricted airspace; Reagan National has unusually tight approach/departure requirements... so it doesn't surprise me that if this was going to happen somewhere, it would be there.
As someone working in developer tools for a company with thousands of people developing software on MacBooks, MAN do I resent SIP. I've recently started calling it "Systems Implementation Prevention".
It's incredible that it's 2024 and I can't cobble together anything vaguely container-like on macOS because:
* bind mounts don't exist (?!)
* clonefile() could maaaybe do the job but doesn't work cross-volume and a lot of the stuff outside of /Users is a different volume
* there's no filesystem namespace.
* chroot doesn't work either, because /usr/lib/libsystem.B.dylib is required but doesn't actually exist on disk as a regular file (it lives in the dyld shared cache).
* And it sounds like chroot runs afoul of some SIP rule nowadays even if you can get past the above.
* A lot of this could be worked around with FUSE, but in order to turn that on, we'd have to turn off a lot of SIP.
The closest we can get without virtualization is sandbox-exec, which can only allow or deny file reads by path, with no path translation. And it's deprecated, too.
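For anyone who hasn't seen one: sandbox-exec profiles are written in a small Scheme-like policy language (SBPL). A minimal sketch of the path-based allow/deny it offers; the profile filename and denied path are illustrative:

```
;; deny-etc.sb: allow everything, then deny reads under /private/etc.
;; Run with: sandbox-exec -f deny-etc.sb cat /etc/hosts
(version 1)
(allow default)
(deny file-read* (subpath "/private/etc"))
```

Note there's no way to remap a path for the sandboxed process, only to allow or deny access to it, which is why it's no substitute for bind mounts.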
Never mind that dtrace exists but you're not allowed to use that either.
Interesting, I hadn't heard of this. First impression skimming the docs is that they've gone to significant trouble to make it not generically useful as a FUSE replacement but I could be misreading.
Not macOS directly, but there's fuse-t, which works in userspace: it just creates an NFS server and automatically mounts it via macOS's own capabilities.
The library is a drop-in replacement for libfuse and works great for me.
It's very heavyweight, and there's no good shared filesystem option.
We did use virtualization for a bunch of stuff before the move to Apple Silicon, back when Hypervisor.framework and xhyve actually existed and were plausibly useful.
Those also fell by the wayside in the architecture migration and now virtualization has a massive performance cost.
Apparently the M4 chips are on ARMv9, which is much better at virtualization, but it remains to be seen whether Apple provides anything lightweight again.
Filesystem perf is definitely an issue, but cpu wise virtualization perf is basically free on Apple Silicon. I don’t know why you think Hypervisor.framework went away or became useless in the architecture migration. Obviously x86 VMs are slow, but we’ve been using arm64 VMs for years now with great results.
Yep. However, before the Apple Silicon migration, VT-x gave us extremely low-overhead virtualization. We built a tiny linux kernel that booted in a second or two and were able to run whatever we wanted with minimal perf overhead.
In the Apple Silicon migration, obviously emulating x86_64 got slow, but even when we built ARM64 VMs, performance was still miserable: there was (is?) no way -- at least no way we ever figured out -- to get reasonable perf out of virtualization on a macbook.
It's possible that this changed post-M1 and it sounds likely it's set to change with M4.
EDIT: OK, I'm probably hallucinating more problems than there actually turned out to be, based on the pain in the first year of the M1 chips.
If you are referring to the nested virtualisation support in ARM v8, it was added in the ARM v8.3-A revision of the architecture, and M1 uses ARM v8.5-A as the baseline.
But yes, virtualisation support for ARM (in general) was abysmal, and Apple Silicon was the catalyst that pushed people over the edge towards improving it across aarch64 (also in general).
I guess your dainty, utopian senses are irrationally offended by something that works. Some types of virtualization offer hard isolation guarantees, while cgroups, chroot, jails and the like provide pretend isolation, lacking hard guarantees about either security or resource limits. KVM is tiny, and so is Virtualization.framework.

If you want perfect "containers", you're not going to find them anywhere, because they try to solve a problem (convenience, speed, and isolation) at the wrong level, in the wrong way. Type 1 Xen and VMware are the gold standards, supporting all sorts of deduplication, replication, and migration options that containers can't touch. Type 2 Kata Containers is another option out there with stronger guarantees and the same interface as CRI.

If these don't work for you, write a better solution that can fairly divvy up disk IOPS and latency, process manipulation, memory shares and bandwidth, network bandwidth and priority, and VFS, while sandboxing misbehaving processes so they can't take down other containers on the same host. I submit that these are essentially impossible goals with the architecture of Linux, which is why virtualization with paravirtualized guests is generally superior at providing service guarantees: there is out-of-band management exterior to fallible, DoSable containers.
Lithium ion batteries in devices are sandwiched layers enclosed in a kind of 'pouch', right? So what if you manufactured one of these that looked identical to the normal battery, but only had half a battery inside, and the rest of it was plastic explosive. Maybe put a tiny chip in there that, when a particular pattern of current draw happens, fires a detonator. Then, some firmware hack in the device proper that responds to some event and actuates that current draw pattern. It wouldn't even look suspicious if you opened it up.
That's an interesting idea, and it wouldn't even need a firmware hack... a real time clock circuit with a specific date/time to detonate would be simpler and easier to coordinate simultaneous detonation.
Parent knows what they're doing here, but a helpful thing to know is that you generally want at least 3-5% salt by weight for safety (though less can be okay, depending on other factors)… and if you're not building in a margin of safety, you need to weigh the peppers and water together and calculate the 5% against that total, not just against the water.
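The arithmetic is simple but worth spelling out, since weighing only the water roughly halves the actual concentration in a typical half-peppers, half-water ferment. A quick sketch; the function name and example weights are illustrative:

```python
# Compute salt so that it is `percent`% of the TOTAL weight
# (peppers + water), not of the water alone.

def brine_salt_grams(pepper_g: float, water_g: float, percent: float = 5.0) -> float:
    """Grams of salt for `percent`% of the combined pepper + water weight."""
    return (pepper_g + water_g) * percent / 100.0

# 500 g peppers + 500 g water at 5% -> 50 g salt.
# Calculating 5% against the water alone would give only 25 g,
# i.e. roughly 2.5% of the total mash, outside the margin of safety.
print(brine_salt_grams(500, 500, 5.0))  # → 50.0
```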
https://github.com/burke/helix/pull/1