Exactly. It's even how I taught myself extremely basic Pascal -- getting my BASIC Life program running in Pascal. With asterisks.
I taught a friend at uni, who was a much better programmer than me, how the algorithm worked. He did a pixel-by-pixel version in machine code, but it was a bit slow on a ZX Spectrum.
So he did exactly the quarter-character-cell version you describe. I wrote the editor in BASIC, and he wrote a machine-code routine that kicked in when told and ran the generations. For extra fun he emitted some of the intermediate state to the border, so the border flashed stripes of colour as it calculated, so you could see it "thinking". Handy for static patterns -- you could see it hadn't crashed.
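For anyone curious, the quarter-cell trick is easy to sketch in modern terms: treat each character cell as a 2x2 block of Life cells and pick the matching quarter-block glyph. This is a rough Python sketch using Unicode block elements, not the original Spectrum machine code (which used the Spectrum's own quarter-square block graphics the same way):

```python
# Each character packs a 2x2 block of cells; glyph index bits are
# top-left=8, top-right=4, bottom-left=2, bottom-right=1.
BLOCKS = " ▗▖▄▝▐▞▟▘▚▌▙▀▜▛█"

def step(cells, w, h):
    """One Life generation on a w x h grid of 0/1 ints (toroidal wrap)."""
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            n = sum(cells[(y + dy) % h][(x + dx) % w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            nxt[y][x] = 1 if n == 3 or (n == 2 and cells[y][x]) else 0
    return nxt

def render(cells, w, h):
    """Pack each 2x2 block of cells into one character; returns text lines."""
    lines = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            idx = (cells[y][x] * 8 + cells[y][x + 1] * 4 +
                   cells[y + 1][x] * 2 + cells[y + 1][x + 1])
            row.append(BLOCKS[idx])
        lines.append("".join(row))
    return lines
```

The point of the packing is exactly the one described above: four Life cells per screen character, so a 32x24 text screen gives a 64x48 playfield.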
I've been considering doing a quarter-cell Mandelbrot for about 30 years now. Never got round to it yet.
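For what it's worth, the same 2x2-samples-per-character trick carries straight over to the Mandelbrot set: sample the plane at double resolution and map each 2x2 block of in/out results onto a quarter-block glyph. A hedged Python sketch; the viewing window, grid size, and iteration limit are arbitrary choices for illustration, not anything from an actual plan:

```python
# Glyph index bits: top-left=8, top-right=4, bottom-left=2, bottom-right=1.
QUARTERS = " ▗▖▄▝▐▞▟▘▚▌▙▀▜▛█"

def in_set(c, max_iter=50):
    """Escape-time test: True if c appears to be in the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False
    return True

def mandelbrot_quarter(cols=40, rows=20,
                       x0=-2.2, x1=0.8, y0=-1.2, y1=1.2):
    """Render the set as `rows` text lines with 2x2 samples per character."""
    w, h = cols * 2, rows * 2  # sample grid at double the text resolution
    lines = []
    for cy in range(rows):
        row = []
        for cx in range(cols):
            bits = 0
            for dy in (0, 1):          # top row of the 2x2 block first,
                for dx in (0, 1):      # so bits come out TL, TR, BL, BR
                    sx, sy = cx * 2 + dx, cy * 2 + dy
                    c = complex(x0 + (x1 - x0) * sx / (w - 1),
                                y0 + (y1 - y0) * sy / (h - 1))
                    bits = bits * 2 + in_set(c)
            row.append(QUARTERS[bits])
        lines.append("".join(row))
    return lines
```

Printing the returned lines gives the familiar blob at double the apparent resolution of the text grid.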
> If you do get it to work it silently adds extra broken repos which make it impossible to install packages.
This is not true.
I use it extensively and have done for more than 4 years now. It adds nothing at all to the installed OS. It doesn't care about the installed OS: I have successfully installed Linux, FreeBSD, Windows, even FreeDOS from Ventoy.
It comes up regularly in openSUSE communities: it silently breaks the installed OS by hooking into the init process of the ISO and fucking with it for no good reason:
You have misunderstood what these reports are saying.
You claimed "Ventoy adds repos". It does not. It is incapable of doing anything of the kind. It does not run on the installed system. It does not modify the boot media in any way. This is demonstrable and verifiable.
When booted from Ventoy, openSUSE apparently adds the installation media as a repository.
This is not some disaster or horrible hack. This is normal behaviour for Debian, for example.
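For reference, a Debian system installed from offline media typically ends up with the install medium registered as an apt source in /etc/apt/sources.list. The exact label varies by release and image; this line is illustrative only:

```
deb cdrom:[Debian GNU/Linux 12 _Bookworm_ - Official amd64 DVD Binary-1]/ bookworm main
```

That is, "installation media registered as a repository" is an ordinary, expected post-install state, not evidence of tampering.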
That means that something in the openSUSE installer is misinterpreting boot parameters.
This is an openSUSE bug, not a Ventoy bug.
SUSE, though, has an institutional habit of blaming problems on others, or denying that problems exist. I know this for a fact from my own personal direct experience: I worked at SUSE from 2017 to 2021.
You are misreading bug reports, wrongly deducing things that did not happen, and mis-attributing blame. The fault here is yours, and secondarily SUSE's. It is not Ventoy's.
> You claimed "Ventoy adds repos". It does not. It is incapable of doing anything of the kind. It does not run on the installed system. It does not modify the boot media in any way. This is demonstrable and verifiable.
It adds a parameter to the kernel boot line. That is not adding a repository. It is not doing what you claim it does.
I am not putting any pressure on you. If you don't want to use it, then don't.
I find it hugely useful, have been using it for about 10 years now on dozens of machines and hundreds of distros and OSes, and it's saved me not just hours but days and weeks of work, effort, and time wasted writing files to USB keys.
All I am asking you to do is not tell lies about it.
Fine, the repository gets added because Ventoy hijacks the boot process and messes with it; it does not directly add it. The problem is still the same, and it would still be a problem even if it didn't break anything: it should not be hijacking the boot process. There's absolutely no good reason for it.
Tumbleweed is a conventional distro. You're root, you can do whatever you want, you have full R/W access to the entire FS, and updating is by installing lots and lots of packages into the live OS while it is running.
Aeon and Kalpa are immutable: the root fs is largely R/O even to root, and you cannot install or update packages on the running system. To install packages into the OS itself you must reboot, and installation is transactional -- it can automatically undo changes that prevented a successful boot.
> Would the A/B filesystem approach à la Android be a good way to distribute Linux with ZFS-on-root without all the angst from DKMS modules versioning?
This is exactly what Valve's Steam OS 3 does. (Except it uses Btrfs for the two root partitions, not ZFS.)
I think the fact that it's chunks of existing work plugged together is what makes it so impressive to me, in the sense that it means we're actually building somewhat reusable, modular architecture for doing this -- architecture that people can actually repeat and build on separately, and so on. And that's promising for the future, I think. Also, I have to say: excellent article.
There is a level where the criticisms of the neuroscientists are both entirely legitimate and at the same time probably not really valid.
Again, to go to an SF reference, the Australian hard-SF writer and mathematician Greg Egan has gone into this at some depth. I can't call to mind which story it was now, but in one of them he imagined a scenario where the tech is available to do a full whole-body in-silico emulation of a human: every dendrite of every brain cell, every action potential propagating along it, every neurotransmitter diffusing across every synapse.
And what the people running the simulations AND THE PEOPLE BEING SIMULATED discover is that you don't need it most of the time.
In the story, doing the neocortex of the brain and a coarser sim of the underlying structures is enough to support full consciousness. For the peripheral nervous system, an even lower-level sim is enough: your limbs still feel right. For the sympathetic nervous system, you don't bother at all -- just simulate overall excitation levels.
Don't bother doing whole muscles, as most people aren't consciously aware of them anyway, even when running or doing sport -- any more than we're aware of breathing.
You downgrade the sim of everything except the important bits to a coarse low-res approximation and most of the time you wouldn't be able to tell -- but it's much faster and takes much less CPU time, so the same compute substrate can simulate more people faster for a given amount of resources.
No, this is not a full simulation of all the nerve cells in a fly brain, but it seems like it does more or less what a fly would do anyway. It seems quite possible that for a fly, a coarse low-level generalised simulation might be enough to produce something that walks like a fly, feeds like a fly, and maybe flies like a fly and breeds like a fly.
I don't think a fly "knows" which leg to move when walking. I suspect a horse doesn't. I barely do, and I only have 2 of them.
Crude abstracted low-level sims with the right structure might be enough to get the desired behaviour and have a model that's good enough that its behaviour is indistinguishable from the original.
In human terms... if we ever get to the point that we can simulate a brain in a jar, if the sim "lies" to the brain and tells it a body is there, and is doing the normal body stuff and walking and swinging its arms and whatever, that might be enough to "feel real" to the mind in the simulated body.
Do it for an athlete or sports player or a gymnast or a bodybuilder and they'd notice. They'd know. But most of us never would.
You can't feel your individual toes unless you stub one. So don't simulate them.
A low-res fly brain model connected to a lower-res overall ventral nerve cord connected to a trivial fly muscle body which doesn't even simulate individual muscles might be enough that the fly does everything a fly can do.
I think there's another layer to this too, which is that if you've already scanned and uploaded a person's mind, the original person is presumably dead, or at least a completely separate person from the one in the simulation. Moreover, no matter how much verisimilitude you pour into your simulation, it's never really going to be exactly the same. Hence, if we're being clear-eyed about this, presumably the point of doing a brain upload like that isn't to get immortality for a particular person; it's just to get a much more advanced and human-like form of artificial intelligence.
At which point, if we don't need a perfectly accurate neurological and physical simulation to get something that mostly "walks," talks and acts like a human, or something very similar, why would we bother doing it at all?
And yeah, although you could obviously assume that perfect simulation of all of the physical and chemical and neurological processes is necessary to get something that even slightly functions, considering the fact that people can function with 90% of their brain entirely gone, I'm not sure how accurate that is.
And yet I'm not exactly the same person as the one who hadn't yet read your comment, nor the same as the one an hour ago, nor the one that woke up this morning.
Maybe exactly the same isn't the right metric after all.
This is true. But I'm talking about the fundamental substrate and the rules by which it operates, not just the flow of time -- that's probably both a qualitative and a quantitative difference. Additionally, even if it's not about being the exact same, you could view brain uploading as a continuation of the species, in some sense, but you still couldn't view it as a continuation of the same individual... because that guy/gal is right there!
It might well be, yes -- it's been a long time now. I have read all his published novels, except maybe those from the last couple of years, and I am not sure which one is which.
> and while I am not a neuroscientist, I tried to explain this is just a combination of chunks of existing work
Which most complex technology is these days, even the most impactful. Docker is a fancy wrapper around Linux cgroups, Kubernetes combines that with etcd, and stuff like AWS EKS combines that with hypervisors (which I think were based on Linux KVM for a long time).