Non-permissive licenses, open core and proprietary software just will not survive. There is no reality in which I or anyone in my community would use something like e.g. Raycast or the SaaS email clients that someone locks down, extracts rent from, and makes top-down decisions on. Once you've experienced being able to change anything about the software you use with a prompt, while using it, there is no going back to all the glitches, limitations and stupidities. We have to come to terms with infinite software.
Just what absolutely no one needed: another locked-down, non-web platform with horrific security that tries to digitally enslave people just the tiniest level above what they can accept now. I don't see any future where Raycast can survive, and I would say that's a good thing.
I understand some of the skepticism towards this product, but are you saying this will somehow negatively impact Raycast (the company)? Raycast the tool is incredibly useful, so I'm surprised to see this sentiment.
I am saying it's as toxic as Raycast's main product, and they got away with it in a world where people could not replicate the apps and the 100 plugins they use within days. There is zero possibility anyone I know will tolerate a locked ecosystem like this any longer than absolutely needed.
There is the same divide starting to form that NFTs had back in the day. Tech bros instantly like anything with "claw" in the name; the rest of us will dismiss anything with that naming and philosophy as toxic slop culture. Will be interesting to see how far this one goes.
It's just another example, and just a detail in the broader story: we cannot trust any model provider with any tooling or other non-model layer on our machines or our servers. No browsers, no CLIs, no apps, no whatever. There may not be alternatives to frontier models yet, but everything else we need to own as a true open-source, trustable layer that works in our interest. This is the battle we can win.
Why don't people form cooperatives, pool money to buy serious hardware, colocate it in local data centers, and run good local models like GLM on it to share?
We are starting to! TBH it will take some time until this is feasible at larger scale, but we are running a test of this model in one of my community groups.
This take is incredibly short-sighted. Sure, MCP is not perfect and needs better tooling and somewhat updated standards, but CLIs are *maybe* the future for agents that are themselves CLIs. I would argue these agents will not be the mainstream future but a niche I call "low-level system agents", or things for coding bros.

An agent of the future needs to be far more secure, auditable, reasonable and controllable, none of which is possible by slapping a CLI with execution rights into a container, even with a bubblewrap profile. An agent of the future will run in a sandbox similar to a Cloudflare Workers/workerd isolate, with capabilities. The default will be connecting one central MCP endpoint to an agent that runs in its own sandbox without direct access to the systems it works on. The MCP gateway handles everything that matters: connecting LLM providers, tokens for APIs, enforcing policies, permission requests, logging, auditing, threat detection, and also tools. Tools execute at the container level, so there is no need to change anything about existing containerised workloads; it all happens transparently in the container realm.

I am not saying system-level agents have no use, but any company running anything like Kubernetes or Docker Compose will have zero need or tolerance for an agent like that.
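To make the capability idea concrete, here is a minimal sketch of an agent that can only act through a single gateway handle, which enforces policy and keeps an audit log. All names (`Capability`, `Gateway`, `runAgent`) are illustrative inventions for this comment, not a real MCP or workerd API:

```typescript
// Hypothetical sketch: the agent's only capability surface is the gateway
// handle it is given; it never touches the host system directly.

type Capability = "read_logs" | "open_ticket"; // what this agent profile may do

interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

// The gateway checks policy and performs the actual execution on the
// container side; every attempt, allowed or not, lands in the audit log.
class Gateway {
  constructor(
    private allowed: Set<Capability>,
    private audit: ToolCall[] = [],
  ) {}

  call(cap: Capability, call: ToolCall): string {
    this.audit.push(call); // audited regardless of outcome
    if (!this.allowed.has(cap)) {
      return `denied: ${call.tool}`; // policy violation, logged for review
    }
    return `ok: ${call.tool}`; // tool would run on the container level here
  }

  auditLog(): ToolCall[] {
    return this.audit;
  }
}

// An "agent" here is just a function that receives the gateway handle
// and tries to use two tools; the second is outside its capabilities.
function runAgent(gw: Gateway): string[] {
  return [
    gw.call("read_logs", { tool: "logs.tail", args: { lines: 100 } }),
    gw.call("open_ticket", { tool: "tickets.create", args: { title: "disk full" } }),
  ];
}

const gw = new Gateway(new Set<Capability>(["read_logs"]));
console.log(runAgent(gw)); // the ticket call is denied by policy and audited
```

The point of the shape, under these assumptions, is that permissions, auditing and threat detection live in one place rather than being scattered across whatever a CLI with execution rights happens to shell out to.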
Can we please not change the meaning of "chat" to mean "agent interface"? It was painful to see "crypto" suddenly meaning tokens instead of cryptography. Plus I really don't want to "chat" with AI. It's a textual interface.
Fair point, although I think we have OpenAI to blame for that - for buying chat.com and pointing it to the most popular textual AI interface of them all :)
It's an interesting direction if you see it under the umbrella of diminishing costs: you build a product once with vibe coding and a design/product hat on. Once you know what works, you rebuild it 100% in a framework like this. You do this from scratch every time the tech debt or the mismatch between architecture and needs gets too big.
You could also always use the same framework - that's what I'm doing anyway. But you have to remember that no matter how well you spec it, the first iteration of the spec is going to suck anyway.
But you vibe-code it anyway and see what happens. You'll start noticing obvious issues that you can trace back to something in the spec.
Then you throw away the entire thing (the entire project!) and start from scratch. Repeat until you have something you like.
Incremental spec'ing doesn't work though. You need a clean-room approach that carries over only the important learnings from previous iterations. Otherwise the agent will never pick a hard but correct path.
I keep reading these unfair comparisons that mix many different problems into a naive story in favour of CLIs.
First of all, no one should still consider connecting MCP servers directly to agents; this is completely outdated. You connect MCP servers and tools to a single gateway that has an API and handles federation, auditing, policies and much more. A good gateway exposes a tiny minimal context with just instructions on how to query what is available, and has a configurable "eager" flag for the things that should be put eagerly into the context for certain agent profiles.
Secondly, many, many MCP servers are outdated: they were built for far dumber models than what we have today, and carry overly heavy context and descriptions that slow down and degrade current frontier models. If you compare a CLI to a state-of-the-art agent gateway setup tuned for current models, you will find that the only advantage of CLIs is lower operational complexity.
It's funny how many variations of meaning people assign to agent-related terms. Conflating "agent" with "CLI", as the opposite end of a spectrum from "IDE", is a new one I had not encountered before. I run agents with vscode-server, also in a VM, and would not give up the ability to have a proper GUI any time I feel like it; being able to switch seamlessly between more autonomous operation and more interactive work seems useful at any level.