Was working on this same idea (*working = ideating over it). Was really disappointed to see that, after downloading, the app does nothing without an account. An account requirement seems totally unnecessary for local "free" projects with this tool.
The thesis of "What is Intelligence" is built on intelligence being exactly that:
> Intelligence is the ability to model, predict, and influence one’s future; it can evolve in relation to other intelligences to create a larger symbiotic intelligence.
The book is worth a read. But I don't believe its definition limits that kind of intelligence to humans. Then again, I'm only halfway through the book :).
It seems obvious to me that "the ability to model, predict, and influence one’s future" is far more general and capable than "constrained to pattern recognition and prediction of text and symbols." How do you conclude that those are the same?
I do like that definition because it seems to capture what's different between LLMs and people even when they come up with the same answers. If you give a person a high school physics question about projectile motion, they'll use a mental model that's a combination of explicit physical principles and algebraic equations. They might talk to themselves or use human language to work through it, but one can point to a clear underlying model (principles, laws, and formulas) that is agnostic to the human language they're using to work through it.
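To make that concrete (my own illustration, not something from the book or the article): for a projectile launched at speed $v_0$ and angle $\theta$, the explicit model is just a couple of kinematic equations, and they hold no matter which human language you reason through them in:

$$
x(t) = v_0 \cos(\theta)\, t, \qquad y(t) = v_0 \sin(\theta)\, t - \tfrac{1}{2} g t^2, \qquad R = \frac{v_0^2 \sin(2\theta)}{g}
$$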
I realize some people believe (and it could be) that ultimately it really is the same process. Either the LLM does have such a model encoded implicitly in all those numbers, or human thought using those principles and formulas is the same kind of statistical walk the LLM is doing. At the very least, that seems far from clear. This seems reflected in results like the OP's.
> Likewise, a METR report found that AI coding tools, which are meant to be the most promising application for generative AI, actually slow developers down. Both studies cited the same issue, “hallucinations”.
> AI hallucinations are one of the best bits of PR ever. The term reframes critical errors to anthropomorphise the machine, as that is essentially what an AI hallucination is: the machine getting it significantly and repeatedly wrong. Both MIT and METR found that the effort and cost required to look for, identify, and rectify these errors was almost always significantly larger than the effort the AI reduced.
> In other words, for AI (specifically generative AI) to be even remotely useful in the real world and have a hope in hell of generating revenue by augmenting workers at scale, let alone replacing them like it has promised to, it needs to cut “hallucinations” down to basically zero.
As someone who uses Claude 4.5 in Cursor every workday, this rings extremely hollow. I find myself thinking daily, “I would never have had time to do this before.”
Have an idea for a script? You don’t have to lose a day building it. Wanna explore a feature? Make a worktree and let the agent go (sketch below). It’s fundamentally changed my workflow for the better, and I don’t wanna go back, hallucinations and all.
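For anyone who hasn’t used it: “make a worktree” here is just git’s built-in worktree command, which checks out a branch into a separate directory so the agent can churn there without touching your main checkout. A minimal sketch (the branch and path names are placeholders):

```
# check out a new branch in a sibling directory
git worktree add ../feature-x -b feature-x

# open ../feature-x in Cursor (or your editor of choice)
# and let the agent work there in isolation

# when done, merge or discard, then clean up
git worktree remove ../feature-x
```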
I have no proof of this, but I would bet there will be some quid pro quo between Trump and the pardonee. Trump does not usually give things away; he leverages his power to get more power/money.