If it can run kubectl, it can run any other command too. Unless you're running it as a different user and have put a bit of thought into limiting what that user can do, that's likely too much leeway.
That's only really relevant if you're leaving it unattended though.
this is more about the service account than the runtime environment, i think. if you put your admin service account in the docker container, the agent can still wreak havoc. what Docker buys you is keeping the admin service account on your host FS hidden from the agent.
Keeping the powerful credentials where the agent can't reach them does buy you a bit of safety. But I still think it's a bit loose compared with exposing an API to the model that can only do what you intend for that model to do.
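Something like this is what I mean. A hypothetical sketch, not a real setup: the endpoint paths and the kubectl arg allowlist are mine, and you'd adapt them to whatever you actually want the model to be able to do.

```ts
// Hypothetical sketch: a tiny HTTP facade the agent calls instead of raw kubectl.
// Only the fixed, read-only actions below are exposed; nothing else reaches the cluster.
import { createServer } from "node:http";
import { execFile } from "node:child_process";

// Allowlist of actions -> fixed kubectl arguments (illustrative, adjust to taste).
const actions: Record<string, string[]> = {
  "/pods":   ["get", "pods", "-n", "default", "-o", "json"],
  "/events": ["get", "events", "-n", "default", "-o", "json"],
};

createServer((req, res) => {
  const args = actions[req.url ?? ""];
  if (!args) {
    res.writeHead(404).end("unknown action");
    return;
  }
  // execFile (not exec) so nothing is shell-interpreted; the args are fixed above,
  // so the model can't smuggle in extra flags or subcommands.
  execFile("kubectl", args, (err, stdout, stderr) => {
    if (err) {
      res.writeHead(500).end(stderr);
      return;
    }
    res.writeHead(200, { "content-type": "application/json" }).end(stdout);
  });
}).listen(8080);
```

The point being: the agent's whole capability surface is that allowlist, not "whatever the kubeconfig allows".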
Not sure about the MCP, but I find that using something (RAG, or otherwise providing docs) to point the LLM specifically at what you're trying to use works better than just relying on its training data or browsing the internet. An issue I had was that it would use outdated docs, etc.
Claude is; some models aren't. In some cases the MCPs do get the models to use tools better due to the schema, but I doubt kubectl is one of them (using the git mcp in claude code... facepalm)
Yeah fair enough lol… usually I end up building model-optimized scripts instead of MCPs, which just flood the context window with JSON and UUIDs (looking at you, Linear). Much better to have Claude write 100 lines of TS that drop a markdown file with the issue, all its comments, and no noise. Something shaped like the sketch below.
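Sketch only: I'm going from memory on Linear's GraphQL field names (verify against their schema), and `LINEAR_API_KEY` plus the issue id argument are placeholders.

```ts
// Dump a Linear issue + comments as clean markdown for the model to read.
// Field names are from memory; check against Linear's GraphQL schema.
import { writeFileSync } from "node:fs";

const query = `
  query ($id: String!) {
    issue(id: $id) {
      identifier
      title
      description
      comments { nodes { body user { name } } }
    }
  }`;

async function dumpIssue(id: string) {
  const res = await fetch("https://api.linear.app/graphql", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: process.env.LINEAR_API_KEY ?? "",
    },
    body: JSON.stringify({ query, variables: { id } }),
  });
  const { data } = await res.json();
  const issue = data.issue;

  // Just the signal: title, description, comments. No UUIDs, no JSON envelope.
  const md = [
    `# ${issue.identifier}: ${issue.title}`,
    issue.description ?? "",
    "## Comments",
    ...issue.comments.nodes.map(
      (c: { body: string; user?: { name: string } }) =>
        `**${c.user?.name ?? "unknown"}:** ${c.body}`
    ),
  ].join("\n\n");

  writeFileSync(`${issue.identifier}.md`, md);
}

dumpIssue(process.argv[2]);
```

Claude reads one markdown file instead of paging through tool-call responses, and the context window stays clean.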
does it? Did you forget the prompts? MCP is just a protocol for tool/function calling, which in turn is part of the prompt, and quite an important part, actually.
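For the skeptics, roughly what that looks like. The shape follows the MCP spec's tools/list result; this particular tool is made up. The client serializes these definitions and they land, in some form, in the model's context, i.e. in the prompt:

```ts
// A tool definition as an MCP server would advertise it via tools/list
// (shape per the MCP spec; the tool itself is a made-up example).
const tool = {
  name: "get_pod_logs",
  description: "Fetch recent logs for a pod in a given namespace.",
  inputSchema: {
    type: "object",
    properties: {
      namespace: { type: "string" },
      pod:       { type: "string" },
      tailLines: { type: "number" },
    },
    required: ["namespace", "pod"],
  },
};

console.log(JSON.stringify(tool, null, 2));
```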
Did you think AI works by prompts like "make magic happen" and it... just happens? Anyone who makes arguments that dumb doesn't deserve a job in tech.
I allowed Claude to debug an ingress rule issue on my cluster last week for a membership platform I run.
Not really the same, since Claude didn't deploy anything, but I WAS surprised at how well it tracked the ingress issue down to a cron job accidentally labeled as a web pod (and trying to serve HTTP requests).
It actually prompted me to patch the cron itself, but I don't think I'm bullish enough yet to let CC patch my cluster.
oh yeah, we had Claude diagnose a production k8s Redis outage last week (it figured out we needed to launch a new instance in a different AZ, the one holding the previous Redis instance's AZ-scoped EBS PVC, after a cluster upgrade had moved our nodes).