Hacker News | mycall's comments

When your watch supports the most common computer language on Earth, it is hard to go wrong with it. 4-week standby is pretty good too.

There are papers [0] showing that skill and agent files reduce reasoning effectiveness in some use cases (e.g. when they are autogenerated).

[0] https://arxiv.org/abs/2602.11988

reference: https://news.ycombinator.com/item?id=47034087


TDD is a great way to start the plan: stub out the things it needs to achieve, with E2E tests being the most important. You still need to read through the tests so it can't cheat, but the codebase will be much better off with them than without them.
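To make the idea concrete, here is a minimal sketch of that workflow (the `signup` function and its behavior are made up for illustration): the E2E test is written first as the contract, and the implementation only exists to make it pass.

```python
# Hypothetical example of test-first stubbing: the E2E test below was
# written before signup() had a body; the minimal implementation came
# second, only to satisfy the test.
def signup(email: str) -> dict:
    # Minimal implementation written after the test existed.
    return {"status": "created", "email": email}

def test_signup_end_to_end():
    result = signup("user@example.com")
    assert result["status"] == "created"
    assert result["email"] == "user@example.com"
```

Reading the test yourself is the anti-cheating step: an agent can trivially make `signup` return a hardcoded dict, and only a human review of the assertions catches that.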

* MOV (x86): using memory-mapped lookup tables, you can simulate logic gates and branching using only MOV.

* PowerPoint (Without Macros): using On-Click Animations and Hyperlinks, shapes on slides act as the tape and clicking them triggers animations that move the head or change the state of the slide.

* find and mkdir (Linux commands): find has an -execdir flag that executes commands in the directories it finds. By using mkdir to create specific folder structures, you can create a feedback loop that functions as a tag system (a known route to universal computation).

* Soldier Crabs: Researchers showed that swarms of Mictyris guinotae can be funneled through gates to implement Boolean logic. While a full computer hasn't been built with them, the logic gates (AND, OR, NOT) are the building blocks for one.
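The MOV trick is easy to sketch in any language: once logic is a table lookup, plain "move" operations (assignments and indexed loads) are all you need. A toy illustration, with the tables and `mov_select` helper made up here for demonstration:

```python
# MOV-style computation: no arithmetic operators, no if/else --
# only assignments and table indexing, mirroring how mov with
# memory-mapped lookup tables simulates gates and branches.
AND = [[0, 0], [0, 1]]   # AND[a][b]
OR  = [[0, 1], [1, 1]]   # OR[a][b]
NOT = [1, 0]             # NOT[a]

def mov_select(cond, if_false, if_true):
    # "Branching" as a lookup: the condition bit indexes the result.
    table = [if_false, if_true]
    return table[cond]

a, b = 1, 0
x = AND[a][b]                            # 0
y = OR[a][b]                             # 1
z = mov_select(NOT[b], "skip", "taken")  # NOT[0] = 1 -> "taken"
print(x, y, z)
```

The real mov-only story (e.g. the movfuscator) also needs one jump back to the top to loop forever, but the gate-and-select machinery is exactly this table-indexing idea.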

Even water is Turing Complete:

* Fluidic Logic Gates: the Coandă effect is the tendency of a fluid jet to stay attached to a convex surface. By using tiny air or water jets to push a main stream from one channel to another, you can create the fluid equivalent of a transistor.

* MONIAC (Monetary National Income Analogue Computer)

* The Navier-Stokes equations, which describe how fluids move, are Turing complete.

* In 2015, Stanford researchers developed a computer that operates using the physics of moving water droplets. Tiny iron-infused water droplets are moved by magnetic fields through a maze of tracks. The presence or absence of a droplet represents a 1 or a 0. By colliding with or diverting each other, the droplets interact and perform calculations.
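The logic of such collision-based gates is simple to model in the abstract. A toy sketch (this is my simplification, not the actual Stanford channel geometry): a bit is the presence or absence of a droplet, two droplets that meet are forced into one output channel, and a lone droplet slips into the other.

```python
# Toy model of a collision-based droplet gate: one physical junction
# yields both an AND output (the collision channel) and an OR output
# (the either-droplet channel) at once.
def droplet_gate(a: int, b: int) -> tuple[int, int]:
    collision_channel = a & b   # fires only if both droplets arrive
    either_channel = a | b      # fires if any droplet arrives
    return collision_channel, either_channel

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", droplet_gate(a, b))
```

One junction computing two functions at once is characteristic of conservative, collision-based computing: droplets (like billiard balls) are never destroyed, only rerouted.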


So basically you are arguing a Type Theory vs Set Theory problem: Foundationalism vs Engineering Refinement. Since we read here of multiple use cases for LLMs on both sides of that CS divide, we can expect an eventual convergence of these approaches; and if not that, some formal principles should emerge for when to use which.

This discussion already started in the sixties (see e.g. the 1969 publication by McCarthy and Hayes, where they describe the "frame problem" as a fundamental obstacle to modeling a dynamic world using first-order logic and monotonic reasoning). A popular attempt to "solve" this problem is the Cyc project. Monotonic logic is now generally understood as a special, restricted case (a subset) of broader non-monotonic theories.
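The textbook illustration of the difference is the birds-fly default, sketched here in a few lines (the `conclusions` function and the tweety facts are the standard toy example, not any particular system's API):

```python
# Monotonic logic: adding facts can only add conclusions.
# A default rule ("birds normally fly") breaks that: learning a new
# fact (tweety is a penguin) RETRACTS a previous conclusion.
def conclusions(facts: set[str]) -> set[str]:
    derived = set(facts)
    if "penguin(tweety)" in derived:
        derived.add("bird(tweety)")   # monotonic rule: penguins are birds
    # Default (non-monotonic) rule: birds fly unless known to be penguins.
    if "bird(tweety)" in derived and "penguin(tweety)" not in derived:
        derived.add("flies(tweety)")
    return derived

before = conclusions({"bird(tweety)"})
after = conclusions({"bird(tweety)", "penguin(tweety)"})
print("flies(tweety)" in before)  # True
print("flies(tweety)" in after)   # False: more facts, fewer conclusions
```

That retraction is exactly what first-order logic cannot express, which is why the frame problem pushed AI toward non-monotonic formalisms (circumscription, default logic) of which classical logic is the retraction-free special case.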

I'm familiar with Cyc but never considered it monotonic reasoning, though it definitely makes sense in retrospect. It appears Lean Machines [0] is a step ahead, combining both sides of the frame problem, although it likely leans towards Lean (pun intended).

[0] https://github.com/lean-machines-central/lean-machines


Thanks for the hint. The "LeanMachines" project seems to literally recreate Event-B constructs (contexts, machines, events, and refinement proof obligations) inside the Lean 4 proof assistant, using Lean 4 as a "host language".
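The flavor of that embedding is easy to sketch. This is NOT the LeanMachines API (the names here are made up); it just shows the Event-B idea of a machine whose invariant travels with its state as a proof:

```lean
-- Toy Event-B-flavoured machine in plain Lean 4 (hypothetical names).
structure Counter where
  value : Nat
  cap   : Nat
  inv   : value ≤ cap   -- machine invariant, carried as a proof term

-- An "event" fires only when its guard holds, and must re-establish
-- the invariant. Here the proof obligation is discharged by the guard
-- itself, since `value < cap` unfolds to `value + 1 ≤ cap` for Nat.
def Counter.inc (m : Counter) (h : m.value < m.cap) : Counter :=
  { value := m.value + 1, cap := m.cap, inv := h }
```

Refinement in the real project then amounts to relating two such machines and proving the Event-B proof obligations as ordinary Lean theorems.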

Both type theory and set theory are formal logic, so I don't see how that's what's being argued. Rather, there are some things that are formal-logic-like (e.g. set theory) and many other things that are not (like biology; you'll always find some weird organism breaking your assumptions).

Could you put that distinction into the AGENTS.md file so it will understand and follow that nuance?

I have several techniques queued up that attempt to counter it. The distinction in AGENTS.md is definitely part of it.

Not sure if they will work yet.


> Imagine the possibilities

1.5B models are not very bright, which doesn't give me much hope for what they could "claw" or accomplish.


A 1.5B model can be very good at a domain-specific task like entity extraction. An OpenRouter-style service that routes to highly specialised LMs could be successful, but I haven't seen it in reality myself.
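The routing layer itself is simple; the hard part is the classifier. A minimal sketch, where the model names and the keyword-based `classify` rule are entirely hypothetical (a production router would use a trained classifier, possibly itself a small model):

```python
# Hypothetical router dispatching each prompt to a small specialist model.
SPECIALISTS = {
    "entity_extraction": "tiny-ner-1.5b",
    "code": "tiny-coder-1.5b",
    "general": "fallback-7b",   # catch-all for out-of-domain prompts
}

def classify(prompt: str) -> str:
    # Stand-in for a real intent classifier.
    text = prompt.lower()
    if "extract" in text:
        return "entity_extraction"
    if "def " in prompt or "function" in text:
        return "code"
    return "general"

def route(prompt: str) -> str:
    return SPECIALISTS[classify(prompt)]

print(route("Extract all company names from this paragraph"))
```

The economics only work if misroutes are rare: a 1.5B specialist that receives an out-of-domain prompt fails much harder than a generalist would, which may be why this pattern is rarely seen in the wild.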

Perhaps you just haven't used the correct AI yet? Perhaps none of us have in that Forth doesn't have much of a large dataset to train from?

Can you link to the programming challenge? It would be interesting to see if recursive language models that use double-blind latent space might work better.



> Perhaps none of us have in that Forth doesn't have much of a large dataset to train from?

Well, being terse as heck is the point of Forth so of course the dataset isn't large /j.

More seriously, I think the bigger issue is that Forth isn't exactly uniform. It is so moldable that everyone has their own style.


Q4_K_S @ 116 GB

IQ4_NL @ 112 GB

Q4_0 @ 113 GB

Which of these would be technically better?

[1] https://huggingface.co/bartowski/stepfun-ai_Step-3.5-Flash-G...


Of those, Q4_K_S is the safe pick: the K-quants generally beat the legacy Q4_0 format at the same size, and IQ4_NL is comparable in quality but can run slower on some backends.

I had to do MTU tuning on macOS on the ZeroTier interface (find your feth name via ifconfig):

# Replace feth1234/feth2345 with your active interface

sudo ifconfig feth1234 mtu 1400

sudo ifconfig feth2345 mtu 1400

...and for working with Windows peers, manually "Orbit" the Windows peer as well as adding a direct routing hint for the internal ZeroTier IP. ZT definitely takes some tuning effort.

