
Thinking aloud, and this is probably a bad idea for reasons I haven’t thought of.

What if pointers were a combination of values, like a 32 bit “zone” plus a 32 bit “offset” (where 32/32 is probably really 28/36 or something that allows >4GB allocations, but let’s figure that out later). Then each malloc() could increment the zone number, or pick an unused one randomly, so that there’s enormous space between consecutive allocs and an address wouldn’t be reissued quickly. A dangling pointer would then point at an address that isn’t mapped at all until possibly 2^32 malloc()s later. It wouldn’t help with long-lived dangling pointers, but would catch accessing a pointer right after it was freed.
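Something like this minimal sketch is the shape of the idea (Linux-only, needs a recent kernel/glibc for MAP_FIXED_NOREPLACE; zone_malloc/zone_free and the starting zone number are made up for illustration, and a real malloc would still need to pack small objects somehow):

    /* Sketch of "one zone per allocation": every allocation gets a fresh slice
       of virtual address space at a bumped base address, and free unmaps it,
       so a use-after-free faults instead of aliasing a later allocation. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    #define ZONE_SHIFT 32                 /* one "zone" = 4 GiB of address space */
    static uintptr_t next_zone = 0x100;   /* start well above typical mappings */

    void *zone_malloc(size_t size)
    {
        void *want = (void *)(next_zone++ << ZONE_SHIFT);
        void *p = mmap(want, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    void zone_free(void *p, size_t size)
    {
        munmap(p, size);                  /* this zone is never handed out again */
    }

    int main(void)
    {
        int *x = zone_malloc(sizeof *x);
        *x = 42;
        printf("%p -> %d\n", (void *)x, *x);
        zone_free(x, sizeof *x);
        /* dereferencing x here would now SIGSEGV */
        return 0;
    }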

I guess, more generally, why are addresses reused before they absolutely must be?



It sounds like what you're describing is one-time allocation, and I think it's a good idea. There is some work on making practical allocators that work this way [1]. For long-running programs, the allocator will run out of virtual address space and then you need something to resolve that -- either you do some form of garbage collection or you compromise on safety and just start reusing memory. This also doesn't address spatial safety.

[1]: https://www.usenix.org/system/files/sec21summer_wickman.pdf


> For long-running programs, the allocator will run out of virtual address space and then you need something to resolve that -- either you do some form of garbage collection or you compromise on safety and just start reusing memory

Or you destroy the current process after you marshal the data that should survive into a newly forked process. Side benefit: this means you get live upgrade support for free, because what is a live upgrade but migrating state to a new process with updated code?
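A rough sketch of what that hand-off could look like (POSIX/Linux; restart_with_state and the --state-fd flag are invented for illustration, and the "new code" here is just the same binary re-exec'ing itself):

    /* The old process serializes the state that must survive into an anonymous
       fd, then execs the (possibly upgraded) binary, which inherits the fd and
       restores from it. The old address space, dangling pointers and all, is
       discarded by exec. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/mman.h>

    struct app_state { long counter; };    /* whatever must survive the upgrade */

    static void restart_with_state(const char *binary, struct app_state *s)
    {
        int fd = memfd_create("state", 0);  /* no MFD_CLOEXEC: fd survives exec */
        if (fd < 0 || write(fd, s, sizeof *s) != sizeof *s) exit(1);
        lseek(fd, 0, SEEK_SET);

        char fdarg[32];
        snprintf(fdarg, sizeof fdarg, "--state-fd=%d", fd);
        execl(binary, binary, fdarg, (char *)NULL);
        perror("execl");                    /* only reached if exec failed */
        exit(1);
    }

    int main(int argc, char **argv)
    {
        struct app_state s = { 0 };
        if (argc > 1 && strncmp(argv[1], "--state-fd=", 11) == 0) {
            int fd = atoi(argv[1] + 11);    /* "upgraded" process: restore state */
            if (read(fd, &s, sizeof s) != sizeof s) exit(1);
            close(fd);
        }
        printf("pid %d, counter %ld\n", (int)getpid(), s.counter);
        s.counter++;
        if (s.counter < 3)                  /* pretend an upgrade was requested */
            restart_with_state("/proc/self/exe", &s);
        return 0;
    }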


Oh, nifty! I guarantee you anyone else discussing this has put more than my 5 minutes' worth of thought into it.

Yeah, if you allow reuse then it wouldn't be a guarantee. I think it'd be closer to the effects of ASLR, where it's still possible to accidentally break things, just vastly less likely.


That’s a way of achieving safety that has so many costs:

- physical fragmentation (you won’t be able to put two live objects into the same page)

- virtual fragmentation (there’s kernel memory cost to having huge reservations)

- 32 bit size limit

Fil-C achieves safety without any of those compromises.


For sure. I'm under no illusion that it wouldn't be costly. What I'm trying to suss out is whether libc could hypothetically change to give better safety to existing compiled binaries.


Two things:

- The costs of your solution really are prohibitive. Lots of stuff just won't run.

- "Better" isn't good enough because attackers are good at finding the loopholes. You need a guarantee.


This sounds similar to the 386 segmented memory model: https://en.wikipedia.org/wiki/X86_memory_segmentation#80386_...

However, it was limited to 8192 simultaneous “allocations” (segments) per process (or per whatever unit the OS associates the local descriptor tables with).


You can do this easily with virtual memory, and IIRC Zig's general purpose allocator does this under some circumstances (I don't remember if it's the default or if it needs a flag).
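Rough shape of the virtual-memory trick, sketched in C (POSIX; page_alloc/page_free are made-up names, not how Zig actually implements it): give each object its own page(s), and on free drop the permissions instead of unmapping, so the address range stays reserved forever and any dangling access traps.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void *page_alloc(size_t size)
    {
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    void page_free(void *p, size_t size)
    {
        /* PROT_NONE instead of munmap: the range stays reserved, so neither
           this allocator nor a later mmap(NULL, ...) can reissue the address. */
        mprotect(p, size, PROT_NONE);
        madvise(p, size, MADV_DONTNEED);  /* give the physical pages back */
    }

    int main(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        int *x = page_alloc(pagesz);
        *x = 7;
        printf("%d\n", *x);
        page_free(x, pagesz);
        /* dereferencing x here would now fault immediately */
        return 0;
    }

The obvious cost is the one mentioned upthread: one page minimum per live allocation, plus a page-table entry for every reservation you keep around.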



