Hacker News | guld's comments

TL;DR of the video:

Complex self-replicating programs emerge spontaneously from strings of random noise bytes, with:

- zero mutation,

- a sharp phase transition that looks like gelation,

- a proof that blocking deep symbiogenetic ancestry trees prevents the transition entirely,

- all written in BFF, a variant of Brainfuck that can read and write its own code.


Interesting. Can anyone share personal insights or benchmarks on how effective TOON is compared to, e.g., JSON or Markdown (with Codex, Claude, ...)?
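A quick way to get a feel for the size difference without a model at hand is to serialize the same records both ways and compare. Caveats: character count is only a crude proxy for tokens (a real benchmark should use the target model's tokenizer), and this hand-rolled TOON-style encoder only handles flat, uniform records, so it may not match the official toon library's output byte-for-byte:

```python
import json

def to_toon(name, rows):
    # TOON-style tabular block for flat, uniform records (illustrative
    # sketch only; real TOON has more features and may differ in details).
    keys = list(rows[0])
    header = f"{name}[{len(rows)}]{{{','.join(keys)}}}:"
    body = ["  " + ",".join(json.dumps(r[k]) for k in keys) for r in rows]
    return "\n".join([header] + body)

rows = [{"id": i, "name": f"user{i}", "active": i % 2 == 0} for i in range(50)]
as_json = json.dumps(rows)          # repeats every key in every record
as_toon = to_toon("users", rows)    # states the keys once in a header row
print(len(as_json), len(as_toon))
```

The saving comes entirely from not repeating the keys and punctuation per record, which is also why the gap shrinks for deeply nested or non-uniform data.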


Ideas like this are bad ones. Words matter, and you should put effort into them; minimization is not the primary optimization. Don't let something like this MITM your hard work and change it for the worse.

The reason people write custom instructions is to craft very good instructions and tools, something a machine is not capable of.


Perhaps? I just used it to analyze one of my 96k Zig codebases with Claude Code, and here is (part of) what came back. (I snipped out the deeper analysis above, as it exposes my private project, but it was all correct.)

  Head-to-Head

  ┌──────────────┬─────────┬─────────────┬────────────┐
  │    Metric    │  Opty   │ Traditional │   Ratio    │
  ├──────────────┼─────────┼─────────────┼────────────┤
  │ Input tokens │ ~13,500 │ ~39,408     │ 2.9x fewer │
  ├──────────────┼─────────┼─────────────┼────────────┤
  │ Tool calls   │ 21      │ 61          │ 2.9x fewer │
  ├──────────────┼─────────┼─────────────┼────────────┤
  │ Round trips  │ 5       │ 9           │ 1.8x fewer │
  └──────────────┴─────────┴─────────────┴────────────┘
I had it run a separate analysis using traditional vs. opty and count the actual tool calls and input token counts. My prompt was basically, "do a full analysis of this entire codebase."


You're focused on quantity, but that's yesterday's problem: tokens are getting cheaper and contexts are getting longer.

Try quality instead.


Well, actually there is ROE.md: no code, just a Markdown file to generate a claw.


The code is always generated using the latest LLM, ensuring that it takes advantage of the latest architectures and programming language features.


You joke, but that's the accelerationists' dream.


Why not just add a list of, say, 100 captchas that have to be solved in a very short time, like <100 ms, to pass?



RIP. I loved PSO on the Dreamcast and sank a lot of hours into that game back then... Does anyone here remember it? The Tamagotchi-esque memory cards (VMU) were cool, too.


I totally get what you are feeling; I felt the same just 6 weeks ago.

It just has not clicked for you yet.

There is probably some key feature missing that you deeply care about but do not yet see solved, or on the horizon of being solved, by a personal "Jarvis".

Personal assistants fulfill different needs for everyone. I personally care a lot about having fun coding again; that's what the OpenClaw craze made me feel for the first time in decades. I built my own OpenClaw assistant generator from scratch using a simple Markdown file because it is just so fun. I am not using it for anything notable yet, but I am starting to see its potential.

Just ponder what it is that you get out of using ChatGPT and imagine how it could be better and more personal to you. You may find some key feature missing from OpenClaw, or some completely orthogonal project idea that excites you.


This comment looks like a company's PR :/


I am a solo founder currently (re-)starting multiple FOSS projects. Not much to show yet, because most of my projects are down while I start fresh...

- Tool.io (https://tool.io), a digital tool library; think Wikipedia, but for tools. Still rewriting it, publishing to GitHub soon.

- LayerGolem (https://layergolem.com), an OpenClaw-like agent for business.

- animania.info (https://animania.info), this one actually has content: some ~50 hand-written, MIT-licensed CSS animations that I created for fun in the pre-LLM era.

- I am also in the midst of creating a "learn programming from scratch" YouTube channel.

- ROE.md (https://github.com/guld/ROE.md), raise your own personal AI assistant, like a vibe pro, from a single Markdown file.

Happy building everyone!



Why did they have to tweak sampling parameters so much for the benchmarks? Looks like rerun hacking.


Let's hope they release it on Hugging Face soon.

I tried their keyboard-switch demo prompt and adapted it to create a 2D, WebGL-less version using CSS and SVG, and it seems to work nicely; it thinks for a very long time, however. https://chat.z.ai/c/ff035b96-5093-4408-9231-d5ef8dab7261

[1] https://huggingface.co/zai-org

