What paradigms are people using to have AI help generate better specs and then convert those specs into code and test cases? Amazon's Kiro IDE felt to me like a step toward applying AI across the entire SDLC.
An agentic media player, intended as a home media server for.. uhh.. seasonal vacation videos with subtitles. I've experimented a lot with different "levels" of AI automation, starting from simple workflows, moving to more advanced ones, and soon to fully agentic.
Pretty good practice project! All written in Go with minimal dependencies and an embedded vanilla-js frontend built into the binary (it's so small it's negligible).
This seems pretty knee-jerk. I do most of this and have delivered a hell of a lot of software in my life. Many projects are still running, unmodified, in production, at companies I’ve long since left.
You can get a surprising amount done when you aren’t spending 90% of your time fighting fires and playing whack-a-mole with bugs.
Well, I'm sure you're well aware of the perils of premature optimization and know how to deliver a product within a reasonable timeframe. TigerStyle seems to me not to be developed through the lens of producing value for a company via software, but rather of having a nice time as a developer (see: third axiom).
I'm not saying the principles themselves are poor, but I don't think they're suitable for a commercial environment.
I had the same association, but interestingly this version appears to be a "remix" of TigerBeetle's style guide by an unrelated individual. At a glance, there is a lot of crossover but some changes as well.
I think the point is well made though. When you're building something like a transactions database, the margin for error is rather low.
Then I'm curious what their principles on deadlines are. I don't see how they align with their coding styleguide. Taking TigerStyle at face value, it does not encourage delivery. They're practically saying "take the time you need to polish your thing to perfection, zero technical debt".
But ofc, I understand styleguides are... well.. guides. Not law.
I don't really see anything in it that's particularly difficult or counter-productive. Or, to be honest, anything that isn't just plain good coding practice. All suitably given as guidelines, not hard and fast rules.
The real joy of having coding standards is that they set a good baseline when training junior programmers. These are the minimum things you need to know about good coding practice before we start training you up to be a real programmer.
If you are anything other than a junior programmer, and have a problem with it, I would not hire you.
I thought the idea was to isolate the concerns, so that you have a GitHub agent, and a Linear agent, and a Slack agent independently, and that these agents converse to solve the problem?
The monolith agent seems like a generalist which may fail to be good enough at anything. But what do I know
Say you do have those sub-agents: they will likely each have tools, sometimes many, in which case you'll have to route to those tools somehow. The sub-agents themselves are also almost like tools from the root agent's perspective, and there may be many of those, which you also have to route to, in which case you can use this pattern again. Put simply, sometimes increasing the hierarchy is not the right abstraction versus having many tools in one hierarchy, hence the need for more efficient routing.
On the pros of fingerprinting: it's practically the only consistent tool to prevent malicious use in certain use cases, such as app hosting and similar bot protection.
Email validation doesn't work. IP blocking doesn't work. Captcha? Kind of. Fingerprinting? Very effective.
It's effective until you get bots that rotate fingerprints with every request. Then you need to move to behavioural metrics to see if they look different to regular users of the site.