Hacker News: joeljoseph_'s comments

[Experimental Project]

I use both OpenAI Codex and Claude Code daily on the same codebase. The biggest pain is that they don't share memory: Claude fixes a bug, Codex repeats it next session, and every session starts from zero.

So I built Open Timeline Engine — a local-first engine that captures your decisions, mines patterns, and gives any MCP agent shared memory.

It also supports gated autonomy.
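To give a feel for the capture side, here's a minimal sketch of what a shared decision record could look like (field names and the `record_decision` helper are my own illustration, not the engine's actual schema):

```python
from dataclasses import dataclass, field
import time

@dataclass
class DecisionRecord:
    """One captured decision; any MCP agent can read it in a later session."""
    agent: str          # e.g. "claude-code" or "codex"
    summary: str        # what was decided and why
    evidence: list = field(default_factory=list)  # supporting diffs, test runs, etc.
    timestamp: float = field(default_factory=time.time)

def record_decision(store: list, agent: str, summary: str) -> DecisionRecord:
    """Append a decision to the shared local store so other agents see it."""
    rec = DecisionRecord(agent=agent, summary=summary)
    store.append(rec)
    return rec
```

The point is just that the store is shared and local-first: a fix made in one agent's session is visible to the other agent's next session.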


Great point, and yes, that's exactly where this is heading.

Policy-driven persistence and compaction are planned, and parts of the lifecycle are already live.



Interesting approach. How do you handle long-term drift or incorrect pattern capture?

If the system mines behavioral patterns from decisions, I imagine there’s a risk of reinforcing mistakes over time. Do you have a mechanism for pruning, versioning, or validating learned memory before it propagates across agents?


Yeah, that's a real concern. A few things I do:

Patterns aren't blindly trusted — they carry confidence scores and need real evidence before they go active. Low-confidence stuff never drives autonomous execution.
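To sketch the idea (names and thresholds here are illustrative, not the actual implementation):

```python
def pattern_is_active(confidence: float, evidence_count: int,
                      min_confidence: float = 0.8, min_evidence: int = 3) -> bool:
    """A mined pattern only becomes active once it has both enough
    confidence and enough independent supporting observations.
    Low-confidence patterns never drive autonomous execution."""
    return confidence >= min_confidence and evidence_count >= min_evidence
```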

If something wrong gets learned, you can deprecate it, reject it, or hard-delete it. There's also explicit "avoid" rules you can set.
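Conceptually, that's a status check plus an avoid list, something like (again, illustrative names only):

```python
from enum import Enum

class PatternStatus(Enum):
    ACTIVE = "active"
    DEPRECATED = "deprecated"   # kept for history, never acted on
    REJECTED = "rejected"       # explicitly marked wrong by the user

def usable(status: PatternStatus, avoid_rules: set, pattern_id: str) -> bool:
    """Only ACTIVE patterns not covered by an explicit avoid rule are usable.
    Hard-deleted patterns simply no longer exist in the store."""
    return status is PatternStatus.ACTIVE and pattern_id not in avoid_rules
```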

Old patterns naturally fade — retrieval is recency-weighted, so stuff that isn't reinforced drops in rank over time. There's also a lifecycle cleanup that prunes stale records.
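The decay is basically exponential; roughly like this (the half-life number is made up for illustration):

```python
import time

def retrieval_score(relevance, last_reinforced, now=None, half_life_days=30.0):
    """Recency-weighted ranking: a pattern's score halves every
    half_life_days unless it gets reinforced again."""
    now = time.time() if now is None else now
    age_days = (now - last_reinforced) / 86400
    return relevance * 0.5 ** (age_days / half_life_days)
```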

And even with all that, safety gates still apply. The system won't act on weak evidence — below a confidence threshold it just asks you instead of guessing.
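The gate itself is simple in concept (threshold value is illustrative):

```python
def gate(confidence: float, threshold: float = 0.75) -> str:
    """Below the confidence threshold the system asks the user
    instead of acting autonomously."""
    return "execute" if confidence >= threshold else "ask_user"
```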

So drift is real, but it's bounded by decay + pruning + manual overrides. If you keep making the same mistake though, yeah, it'll learn that too. That's the honest tradeoff.

