Hacker News: karel-3d's comments

nobody will immediately jump on your code review

Sorry that I am too stupid to understand what Moment is.

It is a collaborative markdown file that also renders very fast. So far so good.

And then... it somehow adds Javascript? And React? And somehow AI is involved? I truly don't understand what it is, and I am (I think) the end customer...

edit: I tried it and I just get "Loading..." forever. So, anyway, next time.


Hey karel-3d, I'm one of the engineers working on Moment and would love to help figure out the issue you're running into. Would you mind reaching out via our Discord or email (trey@moment.dev)?

I would like to know if you plan to open source anything, and how much. https://github.com/orgs/moment-eng/ looks a bit empty

OK I will be happy to help. I didn't mean to be dismissive! Will ping you tomorrow

Well, JavaScript was supposed to be the glue between browsers and Java applets.

> It's me, hi,

> I'm the problem, it's me

- Taylor Swift, 2022


"Well of course I know him, he is me" (Obi-Wan Kenobi, 0 BBY)

How is Ceno making sure someone is not poisoning the cache?

edit: I tried to read the paper and it just references some RFC, which is not making me any smarter.

Again, how am I sure that when I am reading something from the cache, it's really serving what the site was serving somewhere else, and the person saving it there didn't modify it? Is it signed by the original page SSL cert?

edit2: ahh the "injector server", which is run by Ceno, retrieves the page and signs it. So you are moving the trust to Ceno and the central Ceno server actually does the browsing...? So the injectors can just see all the traffic? But that's inevitable I guess, someone needs to see the traffic


Yesterday I stopped hating AI because it converted an old webpack project with impenetrable plugin settings to a single simple Vite config.

I still don't understand how people used to think scripts like this were the proper way to bundle an app.

https://github.com/facebook/create-react-app/blob/main/packa...

vite is great, is all I am saying


An 800-line config to compile code that's later interpreted is wild. I get the general idea behind having a script instead of a static config, so you can do some runtime configuration (whether or not we should have runtime changes to config is a different conversation), but this is absurd.

I'm a big believer in fully reviewing all LLM generated code, but if I had to generate and review a webpack config like this, my eyes would gloss over...


No no no, the script at the link is from BEFORE LLMs. That was how it used to be done. That was the recommended Facebook way.

The LLM-generated Vite config is 20 lines.


Oh yeah, I got that - my comment is a bit confusing reading it back. The fact we used to build trash like that blows my mind. Makes me content having been on the backend.

People fought to replace the tools of the era with this. It had some advantages over time - ES6, a good plugin ecosystem, react adoption - but quickly it just became "the standard" which everyone is afraid to question.

I used to maintain a build workflow library [1] a lifetime ago; while our frontend build needs have evolved way beyond it, I can't avoid the feeling that we overengineered a little too much.

[1] https://github.com/ricardobeat/cake-flour


Thank you Donald Trump for reducing our dependency on fossil fuels!

It's a satire. The authors presented it at FOSDEM. They are people that worked previously for foss communities.

Satire is too dangerous to be presented outside of its community. This honestly should've been left within FOSDEM.

It's great within the context of people who understand it, enlightening even. It sparks conversations and debates. But outside of it, ignorance wields it like a bludgeon, and it's dangerous to everyone around. Look at all the satirical media around fascism: if you knew to criticize it, you could laugh, but for fascists it's a call to arms.


No one who understands the first thing about this topic could possibly have read that web page and not realized that it was satire.

"Those maintainers worked for free—why should they get credit?"

"Your shareholders didn't invest in your company so you could help strangers."

"For the first time, a way to avoid giving that pesky credit to maintainers."

"Full legal indemnification [...] through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright"


Maybe I’m missing something but big corps do this, right? I legitimately expect folks like Musk and Zuckerberg to say these things. I get why that’s exactly the reason it’s satire but it’s a little too close to the truth for me to chuckle about it.

This is because you're already in that mindset.

Try to take the stance of someone who doesn't really know much about open source other than that it's a nuisance to use. To them, this is a great idea! I wanted to use this tool that corporate said we couldn't touch, but now I can!


If people lack a sense of humor or satire, even pathologically, well, too bad for them. Why should the rest of us be denied that satire? It's not harming anyone at all.

Unfortunately it's not too bad for them, it's too bad for everyone they're around. They aren't the ones that lose out when we start dismantling open source communities.

PP's point is that 2025-2026 is exactly the result of satire being weaponized to cause real harm, because people pretend it's truth.

That wasn’t people weaponizing satire, that was people just making weapons

There is an overlay of smeared poop on one of the license files… is that something you are seeing on typical tech company landing pages?

The company is literally named “bad/evil.”


Read the actual article. The AI recommended 5 things that are all more easily done through the UI, and all of them accomplish a different thing than they claim anyway.

I read the article. Parent's comment about automation is spot on. TFA didn't describe any GUI interaction in detail, or even suggest that there was a way to achieve these goals without needing a meatbag to physically interact with the computer (and capture its output in /dev/meatbrain).

But at least TFA wrote up the criticism in text, even transcribing some of the screenshots.


More easily maybe, but the CLI command is deterministic and works as long as the user can successfully paste it to a terminal and run it.

For UI you need to figure out different locales, OS versions, etc.


theshrike79 was talking about automating. Automating via a UI requires a program that can simulate click events and a display server that allows this. It's also really brittle, because you are not actually depending on the action you want to invoke, but on the UI location that action is exposed at.

Automating terminal commands is easy, because that is how the OS works anyway. All programs invoke each other by issuing (arrays of) strings to the OS and telling it to exec them.
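A minimal Python sketch of the CLI side of this (the command is illustrative): invoking a program really is just handing the OS an array of strings, with no display server or pixel coordinates involved.

```python
import subprocess

# Programs invoke each other as arrays of strings; the OS execs them.
# No simulated clicks, no locale-dependent menu locations.
result = subprocess.run(
    ["echo", "hello"],
    capture_output=True,
    text=True,
    check=True,  # raise if the command fails, like a script would want
)
print(result.stdout.strip())  # -> hello
```

The same call works identically in a cron job, a CI pipeline, or another program, which is the point being made about automation.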


I’ll paraphrase a user named Bear from Usenet a few decades back: if all you know how to do is point at what you want, you’re operating at the level of a preverbal child.

I have no idea what I am reading.

> Mycelium structures applications as directed graphs of pure data transformations. Each node (cell) has explicit input/output schemas. Cells are developed and tested in complete isolation, then composed into workflows that are validated at compile time. Routing between cells is determined by dispatch predicates defined at the workflow level — handlers compute data, the graph decides where it goes.

No still don't understand

> Mycelium uses Maestro state machines and Malli contracts to define "The Law of the Graph," providing a high-integrity environment where humans architect and AI agents implement.

Nope, still don't


I'm talking about expressing the application as a state machine and then implementing each step in the state graph as an independent subprogram. The cells accept a state, do some work, and produce a new state. Then the graph orchestrator inspects the state and dispatches to the next appropriate cell.
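Not Mycelium's actual API (the framework is Clojure, and these cell names, predicates, and the orchestrator are all made up); just a hedged Python sketch of the shape being described: cells take a state and return a new state, and the graph layer inspects the result and dispatches to the next cell via predicates on the edges.

```python
# Illustrative sketch only: names and structure are hypothetical.

def validate(state):  # a "cell": state in, new state out, no knowledge of peers
    return {**state, "valid": state.get("amount", 0) > 0}

def charge(state):
    return {**state, "charged": True}

def reject(state):
    return {**state, "charged": False, "error": "invalid amount"}

CELLS = {"validate": validate, "charge": charge, "reject": reject}

# The graph owns the business logic: edges are (predicate, next-cell) pairs.
GRAPH = {
    "validate": [(lambda s: s["valid"], "charge"),
                 (lambda s: not s["valid"], "reject")],
}

def run(start, state):
    """Dispatch loop: run a cell, inspect its output, follow the first
    edge whose predicate matches, stop when no edge matches."""
    node = start
    while node is not None:
        state = CELLS[node](state)
        edges = GRAPH.get(node, [])
        node = next((dst for pred, dst in edges if pred(state)), None)
    return state

print(run("validate", {"amount": 5}))  # -> charged
print(run("validate", {"amount": 0}))  # -> rejected
```

Note how the cells never reference each other: all routing lives in `GRAPH`, which is the "handlers compute data, the graph decides where it goes" idea.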

I have the same problem. The "What It Is" section starts with "Mycelium is a Clojure workflow framework built on Maestro" and that's a bit generic. Maybe something to test some AI-generated code and then test whether the tests are tested enough, using Clojure, but I'm not entirely sure.

The main question, which is not obvious, is: what should I use it for?


I don't understand why the poster (who is the author) links us to a slop report of a test of their library. It would be much more effective to fold part of this info into the README, where we get the context of what they want to achieve (there is a very clear "Why?" section), and then link to it instead. I have flagged it as AI slop.

I don't understand LISP or Clojure, but it seems to be some kind of library for making web services out of LISP, which has some separate components that are somehow well defined. And somehow it's all related to AI.

Again I don't know much about Clojure and I am too slow for functional programming in general.


The whole point of the framework is to see what an LLM-oriented framework would look like. My argument is that the way code is normally structured is not conducive to LLMs, because context grows in an unbounded way and they end up getting lost.

The whole point of the 'slop' report is to have the LLM try implementing the features using both the traditional approach and the framework and then reflect on how it fared with each approach.


Yeah it reads very Time Cube...

The top-level README gives a bit better idea. Armed with that, the explanation might sound a bit more understandable.

I'm not familiar with the project (or Clojure), but let me try to explain!

> Mycelium structures applications as directed graphs of pure data transformations.

There is a graph that describes how the data flows in the system. `fn(x) -> x + 1` in a hypothetical language would be a node that takes in a value and outputs a value. The graph would then arrange that function to be called as a result of a previous node computing the parameter x for it.

> Each node (cell) has explicit input/output schemas.

Input and output of a node must comply with a defined schema, which I presume is checked at runtime, as Clojure is a dynamically typed language. So functions (aka nodes) have input and output types, and presumably they should try to be pure. My guess is there should be nodes dedicated to side effects.

> Cells are developed and tested in complete isolation, then composed into workflows that are validated at compile time.

Sounds like they are pure functions. Workflows are validated at compile time, even if the nodes themselves are in Clojure.

> Routing between cells is determined by dispatch predicates defined at the workflow level — handlers compute data, the graph decides where it goes.

When the graph is built, you don't just travel all outgoing edges from a node to the next; you can place predicates on those edges. The aforementioned nodes do not have these predicates, so I suppose the predicates would be their own small pure-ish functions with the same input data as a node would get, but their output value is only a boolean.

> Mycelium uses Maestro state machines and

Maestro is a Clojure library for Finite State Machines: https://github.com/yogthos/maestro

> Malli contracts

Malli looks like a parsing/data structure specification EDSL for Clojure: https://github.com/metosin/malli

> to define "The Law of the Graph," providing a high-integrity environment where humans architect and AI agents implement.

Well, beats me. I don't know what "The Law of the Graph" is, and the Internet doesn't seem to know either. I suppose it tries to say that, from the processing graph, you can see that given input data at the ingress of the graph, you have high confidence you will get the expected data at the final egress.

I do think these kinds of guardrails can be beneficial for AI agents developing code. I feel that, for that application, some additional level of redundancy can improve code quality, even if the guards are generated by the AI to begin with.


That's mostly correct, one small correction is that cells don't have to be pure. They just have to focus on doing a single task with some hard boundaries.

And what I meant with the law of the graph was simply that the graph defines the actual business logic, and then each cell is a context free component that can be plugged into it. I guess I was just trying to be clever there.

The key benefit I'm finding is that cells can be reasoned about in isolation because they know nothing about one another. You don't have the implicit coupling, embedded in the call graph, that happens in normal programs.

My approach is to use inversion of control where the cell gets some context and resources like a db connection, does some work, and produces the result. That gets passed on to the graph layer which inspects the result, and decides what cell to call next.

With this approach you can develop and test these cells as if they were completely independent programs. The context stays bounded, and the agent doesn't need to know anything about the rest of the application when working on it.

The cells also become reusable, since you arrange them like Lego pieces, and snap them together into different configurations as needed.
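Under the same caveat as before (hypothetical names, not the real Clojure API), a sketch of what "testing a cell in isolation" could look like: the cell gets its context and resources injected, so a test can pass plain data with no graph and no other cells involved.

```python
# Hypothetical cell, following the inversion-of-control shape described:
# the graph layer would inject resources (here, a "db"); the cell just
# does its one task and returns a new state.
def fetch_user(state, db):
    user = db.get(state["user_id"])
    return {**state, "user": user}

# In a test, the "db" is a plain dict; the cell never learns what calls it
# or what runs next, so nothing else needs to exist.
fake_db = {42: {"name": "karel"}}
result = fetch_user({"user_id": 42}, fake_db)
assert result["user"]["name"] == "karel"
print("cell behaves as expected in isolation")
```

Because the cell only depends on its inputs, the same function could be snapped into a different workflow unchanged, which is the Lego-piece reuse being described.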

