
One reason is that not all these websites manage to make equally "creepy" links, even though the basic idea is the same. I remember one version which was a lot more alarming than the current example, with links containing a mix of suspicious content hinting at viruses, phishing, piracy/warez sites, pornography (XXX cams), and Bitcoin scams. I don't remember that website, but the current case seems rather weak by comparison.

That makes it even more confusing. If you’re making something creepy, I can see the argument for “whatever exists isn’t creepy enough, I’ll do it better” but not the reverse.

It's possible the current website is older, or that the creator doesn't know about better alternatives. (Also, they do produce rather short links, unlike some of the others, which don't pass as "URL shorteners". Though I'm not sure whether that's relevant.)

That's just their nature: they are very inexpensive to make. The original question was whether people find them fun and engaging. Clearly they did in the past, though nowadays people's standards have risen a lot. Even graphical adventure games (like Monkey Island) have long fallen out of favor due to a lack of action elements.

Are there any successful examples of LLM text adventures? Last time I heard someone here said it's hard to develop robust puzzles and interactions, because it's hard to control and predict what the LLM will do in a dialogue setting. E.g. the user can submit reasonable but unintended solutions to a puzzle, which breaks the game.

That's completely false!

The LotR movies were a high-water mark, even if they aren't perfect. I wish Peter Jackson would make a film of The Children of Húrin, though that's probably not happening, for many reasons.

It helps to look at CGI with a known reference point, like human characters. Every flaw is much more visible than in a "cartoon" like Zootopia.

Almost every 3D game uses textured polygons almost everywhere (except sometimes for fog or clouds), so this SDF engine is nice to see.

However, he doesn't mention animation, especially skeletal animation. That tends to work poorly or not at all without polygons. PS4 Dreams, another SDF engine, also had strong limitations with regard to animation. I hope he can figure something out, though perhaps his game project doesn't need animation anyway.


His SDF probably puts out a depth buffer, so with some effort (shadows might be hard?) you can just mix it with traditional polygons. The same way raytracing and polygons mix in AAA games.

He's using the SDFs to fill space, sort of like Unreal's Nanite virtual geometry. Nanite also doesn't support general animation; they only recently added support for foliage. So you'd use SDF / Nanite for your "infinite detail" / kit-bashing individual pebbles all the way to the horizon, and then draw polygon characters and props on top of that.

In fact I was surprised to see that Nanite flipped from triangle supremacy to using voxels in their new foliage tech. So maybe the two technologies will converge. The guy who did the initial research for Nanite (his talk also cites Dreams ofc) said that voxels weren't practical. But I guess they hit the limits of what they can do with pixel-sized triangles.


I think they do now support skeletal meshes with virtual geometry: https://dev.epicgames.com/documentation/en-us/unreal-engine/...

Though it says "experimental". Unclear what that means in practice.

This also mentions "skinning": https://dev.epicgames.com/documentation/en-us/unreal-engine/... I believe that's just another term for skeletal meshes / "bones".


I'm not super familiar with this area so I don't follow... Why is animation any more difficult? I would think you could attach the basic 3D shapes to a skeleton the same way you would with polygons.

There are lots of reasons you don’t see a lot of SDF skeletal rigging & animation in games. It’s harder because the distance evaluations get much more expensive when you attach a hierarchy of warps and transforms, and there are typically a lot of distance evaluations when doing ray-marching. This project reduces the cost by using a voxel cache, but animated stuff thwarts the caching, so you have to limit the amount of animation. Another reason it’s more difficult to rig & animate SDFs is because you only get a limited set of shapes that have analytic distance functions, or you have primitives and blending and warping that break Lipschitz conditions in your distance field, which is a fancy way of saying it’s easy to break the SDF and there are only limited and expensive ways to fix it. SDFs are much better at representing procedural content than the kind of mesh modeling involved in character animation and rendering.
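
To make that concrete, here's a minimal C sketch (not taken from the project; every name in it is made up) of why a rigged SDF gets expensive: each distance evaluation loops over the bones and transforms the sample point into every bone's local space, and ray marching performs hundreds of such evaluations per pixel.

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;
    typedef struct { float inv[12]; } Bone;  /* inverse bind * animation transform, 3x4 row-major */

    Vec3 xform_point(const float m[12], Vec3 p) {
        Vec3 r = {
            m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3],
            m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7],
            m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11]
        };
        return r;
    }

    /* Distance to a capsule of length `len` and radius `rad` along local +X. */
    float capsule_sdf(Vec3 p, float len, float rad) {
        float t  = fminf(fmaxf(p.x, 0.0f), len);
        float dx = p.x - t;
        return sqrtf(dx*dx + p.y*p.y + p.z*p.z) - rad;
    }

    /* Scene distance for a "character": min over all bones. With N bones this is
       N matrix transforms per evaluation, and the ray marcher calls this hundreds
       of times per pixel -- that's the cost blow-up described above. Blending the
       limbs smoothly instead of taking a hard min makes it even more expensive. */
    float character_sdf(Vec3 p, const Bone *bones, int n_bones) {
        float d = 1e30f;
        for (int i = 0; i < n_bones; ++i) {
            Vec3 lp = xform_point(bones[i].inv, p);  /* into bone-local space */
            d = fminf(d, capsule_sdf(lp, 0.5f, 0.1f));
        }
        return d;
    }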

One possibility, a little backwards maybe, is to produce a discrete SDF from e.g. a mesh, by inserting it in an octree. The caching becomes the SDF itself, basically. This would let rendering be done via the SDF, but other logic could use the mesh (or other spatial data structure).

Or could the engine treat animated objects as traditional meshed objects (for both rendering and interactions)? The author says all physics is done with meshes, so it seems such objects could still interact with the game world easily. I imagine this would be limited to characters and such. I think they would look terrible using interpolation on a fixed grid anyway, as a rotation would move the geometry around slightly, making these objects appear "blurry" in motion.


Sampling an implicit function on a grid shifts you to the world of voxel processing, which has its own strengths and weaknesses. Further processing is lossy (like with raster image processing), storage requirements go up, recovering sharp edges is harder...

But isn't this what the author is doing already? That's what I got from the video: the SDF is sampled on a sparse grid (only cells that cross the zero level set), and values are then obtained by interpolating on the grid rather than by re-evaluating the full SDF.
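
Something like this C sketch, presumably (a dense corner grid standing in for whatever sparse structure the engine actually uses; all names are hypothetical): once the corner values are cached, a lookup is just a trilinear interpolation of eight stored samples instead of a full SDF re-evaluation.

    #include <math.h>

    typedef struct {
        const float *values;  /* SDF sampled at grid corners, nx*ny*nz entries */
        int nx, ny, nz;
        float cell;           /* cell size in world units */
    } SdfGrid;

    float grid_at(const SdfGrid *g, int x, int y, int z) {
        return g->values[(z * g->ny + y) * g->nx + x];
    }

    /* Approximate distance at world position (px,py,pz) by trilinearly
       interpolating the eight surrounding corner samples. No bounds checks --
       this assumes the query lies inside cached cells near the surface. */
    float sdf_lookup(const SdfGrid *g, float px, float py, float pz) {
        float fx = px / g->cell, fy = py / g->cell, fz = pz / g->cell;
        int   x  = (int)floorf(fx), y = (int)floorf(fy), z = (int)floorf(fz);
        float tx = fx - x, ty = fy - y, tz = fz - z;

        float c00 = grid_at(g,x,y,z)     * (1-tx) + grid_at(g,x+1,y,z)     * tx;
        float c10 = grid_at(g,x,y+1,z)   * (1-tx) + grid_at(g,x+1,y+1,z)   * tx;
        float c01 = grid_at(g,x,y,z+1)   * (1-tx) + grid_at(g,x+1,y,z+1)   * tx;
        float c11 = grid_at(g,x,y+1,z+1) * (1-tx) + grid_at(g,x+1,y+1,z+1) * tx;

        float c0 = c00 * (1-ty) + c10 * ty;
        float c1 = c01 * (1-ty) + c11 * ty;
        return c0 * (1-tz) + c1 * tz;
    }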

This article contains basically no information about the topic mentioned in the headline, just vaguely related chitchat.


I'm pretty sure you can change various file formats without rewriting the entire file and without using "incremental updates".

You can’t insert data into the middle of a file (or remove portions from the middle of a file) without either rewriting it completely, or at least rewriting everything after the insertion point; the latter requires holding everything after the insertion point in memory (or writing it out to another file first, then reading it in and writing it out again).

PDF is designed to not require holding the complete file in memory. (PDF viewers can display PDFs larger than available memory, as long as the currently displayed page and associated metadata fits in memory. Similar for editing.)


While tedious, you can do the rewrite block-wise from the insertion point and only store an additional block's worth of the rest (or twice as much as you inserted).

ABCDE, to insert 1 after C: store D, overwrite D with 1, store E, overwrite E with D, write E.
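
A rough C sketch of that block-carrying scheme, assuming a block-aligned insertion offset (illustrative only: no error handling, and a crash halfway through leaves the file half-shifted, which is one reason real software rarely does this):

    #define _XOPEN_SOURCE 700
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define BLK 4096

    /* Insert one block of data at `offset`, shifting the rest of the file right
       by BLK bytes while holding at most two blocks in memory at a time. */
    int insert_block(int fd, off_t offset, const char new_block[BLK]) {
        struct stat st;
        if (fstat(fd, &st) != 0) return -1;

        char carry[BLK], tmp[BLK];
        memcpy(carry, new_block, BLK);              /* the "1" in the example       */

        for (off_t pos = offset; pos < st.st_size; pos += BLK) {
            ssize_t n = pread(fd, tmp, BLK, pos);   /* store D, then E, ...         */
            if (n < 0) return -1;
            pwrite(fd, carry, BLK, pos);            /* overwrite with carried block */
            memcpy(carry, tmp, (size_t)n);          /* the stored block moves on    */
            if (n < BLK) {                          /* partial tail: flush and stop */
                pwrite(fd, carry, (size_t)n, pos + BLK);
                return 0;
            }
        }
        pwrite(fd, carry, BLK, st.st_size);         /* append the final block ("E") */
        return 0;
    }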


No, if you are going to change the structure of a structured document that has been saved to disk, your options are:

1) Rewrite the file to disk
2) Append the new data/metadata to the end of the existing file

I suppose you could pre-pad documents with empty blocks and then go modify those in situ by binary editing the file, but that sounds like a nightmare.


Aren't there file systems that support data structures which allow editing just part of the data, like linked lists?

Yeah, there are: Linux supports the FALLOC_FL_INSERT_RANGE and FALLOC_FL_COLLAPSE_RANGE flags for fallocate(2). Like most fancy filesystem features, they are not used by the vast majority of software, because software has to run on any filesystem, so you'd always need to maintain two implementations (and extensive test cases).
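
For illustration, a minimal sketch of how those flags are used on Linux (assuming a supporting filesystem; both offset and length must be multiples of the filesystem block size, and unsupported filesystems return EOPNOTSUPP, which is exactly the portability problem mentioned above):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>          /* fallocate() */
    #include <linux/falloc.h>   /* FALLOC_FL_INSERT_RANGE, FALLOC_FL_COLLAPSE_RANGE */

    /* Shift everything from `offset` onward right by `len` bytes, leaving a
       zero-filled gap the caller can then overwrite with the new content. */
    int insert_gap(int fd, off_t offset, off_t len) {
        if (fallocate(fd, FALLOC_FL_INSERT_RANGE, offset, len) != 0)
            return errno == EOPNOTSUPP ? -2 : -1;   /* -2: fall back to a rewrite */
        return 0;
    }

    /* Remove `len` bytes starting at `offset`, shifting the tail left. */
    int remove_range(int fd, off_t offset, off_t len) {
        return fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, offset, len);
    }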

Interesting that after decades of file system history, this is still considered a "fancy feature", considering that editing files is a pretty basic operation for a file system. Though I assume there are reasons why this hasn't become standard long ago.

File systems aren’t databases; they manage flat files, not structured data. You also can’t just insert/remove random amounts of bytes in RAM. The considerations here are actually quite similar, like fragmentation. If you make a hundred small edits to a file, you might end up with the file taking up ten times as much space due to fragmentation, and then you’d need the file system to do some sort of defragmentation pass to rewrite the file more contiguously again.

In addition, it’s generally nontrivial for a program to map changes to an in-memory object structure back to surgical edits of a flat file. It’s much easier to always just serialize the whole thing, or if the file format allows it, appending the serialized changes to the file.


File systems aren't databases, but journaling file systems use journals just like databases, and a journal could in principle turn changes to a file at any granularity into an irreversible transaction. I suppose file systems have to remain "general purpose enough" to be useful (otherwise they become part of a specific program or library), and that's why complex features which might become a pitfall for regular users who expect "just regular files" rarely become the main focus.

But appending changes is a terrible solution, even if it is "much easier" to implement. Not only because it causes data leakage, as in this case, but also because it can strongly inflate the file size. E.g. if you change the header image of a PDF a few times.

Indeed. Userspace-level atomicity is also important, so you probably want to save a backup in case power goes out at an unfortunate moment. And since you already need a backup, you might as well go for a full rewrite + rename combo.
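
For reference, a bare-bones C sketch of that rewrite + rename pattern (simplified: fixed ".tmp" suffix, no O_TMPFILE, no fsync of the containing directory). A crash leaves either the old or the new file intact, never a mix:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int save_atomically(const char *path, const void *data, size_t len) {
        char tmp[4096];
        snprintf(tmp, sizeof tmp, "%s.tmp", path);

        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return -1;

        if (write(fd, data, len) != (ssize_t)len    /* write the whole new version */
            || fsync(fd) != 0) {                    /* make sure it reached disk   */
            close(fd);
            unlink(tmp);
            return -1;
        }
        close(fd);

        return rename(tmp, path);                   /* atomic replace on POSIX     */
    }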

They are fully supported almost everywhere. XFS, ext4, tmpfs, f2fs and a bunch of misc filesystems all support them.

Ext4 support dates back to Linux 3.15, released in 2014. It is ancient at this point!


What this does on typical extent-based file systems is split the extent of the file at the given location (which means these operations can only be done with cluster granularity) and then insert a third extent, i.e. calling INSERT_RANGE once gives you a file with at least three extents (fragments). This, plus the mkfs-options-dependent alignment requirements, makes it quite uninteresting for broad use, much as O_DIRECT is.

Well, better an uninteresting solution than a solution which is actively terrible: appending changes to a PDF, which will inflate its size and cause data leakage.

Look at the C file API, which most software is based on: it simply doesn't allow it. Writing at a given file position just overwrites existing content; there is no way to insert or remove bytes in the middle.

Apart from that, file systems manage storage in larger fixed-size blocks (commonly 4 KB). One block typically links to the next block (if any) of the same file, but that’s about the extent of it.
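
A tiny stdio demonstration of the first point (hypothetical file name, just to show the overwrite semantics):

    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("demo.txt", "w+");
        if (!f) return 1;

        fputs("ABCDE", f);

        fseek(f, 3, SEEK_SET);        /* position at 'D'                 */
        fputc('1', f);                /* overwrites 'D', does NOT insert */

        char buf[16] = {0};
        rewind(f);
        fgets(buf, sizeof buf, f);
        printf("%s\n", buf);          /* prints "ABC1E", not "ABC1DE"    */

        fclose(f);
        remove("demo.txt");
        return 0;
    }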


DD should.

No. Well yes. On mainframes.

This is why “table of contents at the end” is such an exceedingly common design choice.
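
As a toy illustration of the pattern (not any particular real format), a writer appends records freely, then finishes by appending an index plus a small fixed-size footer recording where the index starts; a reader seeks to the footer first. Updating the file is then just "append new data, append a new index, append a new footer":

    #include <stdint.h>
    #include <stdio.h>

    struct footer {
        uint64_t index_offset;   /* where the table of contents starts  */
        char     magic[8];       /* lets a reader sanity-check the file */
    };

    /* Append one record and return its offset, to be listed in the index. */
    long append_record(FILE *f, const void *data, size_t len) {
        fseek(f, 0, SEEK_END);
        long off = ftell(f);
        fwrite(data, 1, len, f);
        return off;
    }

    /* Append the index (count + record offsets), then the footer, always last. */
    void finish_file(FILE *f, const long *offsets, uint32_t count) {
        fseek(f, 0, SEEK_END);
        struct footer ft = { (uint64_t)ftell(f), "TOYTOC" };
        fwrite(&count, sizeof count, 1, f);
        fwrite(offsets, sizeof *offsets, count, f);
        fwrite(&ft, sizeof ft, 1, f);
    }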


This was 1996. A typical computer had tens of megabytes of memory with throughput a fraction of what we have today. Appending an element instead of reading, parsing, inserting and validating the entire document is a better solution in so many ways. That people doing redactions don't understand the technology is a separate problem. The context matters.
