The moat was never really the code... it was always the understanding that produced it. All those failed experiments and design iterations that led to the algorithm... that knowledge lives in your head, not the repo. Someone can clone the implementation, but they can't clone the reasoning behind every decision you made.
The audience problem is separate from the IP problem and worth solving on its own: start writing about the problem you're solving before you open-source anything.
Yes, and the main change is I now weight specificity much more heavily.
The things that still feel trustworthy are very specific details, opinions that could get someone in trouble, and writing that has a clear point of view. Generic correctness is cheap now, so it doesn't carry the same weight it used to.
For computer vision and ML work specifically, RHEL doesn't give you much advantage over Ubuntu or even Fedora for personal use. Most ML tooling (PyTorch, CUDA drivers, Jupyter) is better supported and easier to set up on Ubuntu, and the community resources are much larger.
If you want something RHEL-adjacent without the overhead, Rocky Linux or AlmaLinux give you the same base for free and are worth a look.
Judgment in ambiguous situations is the one thing that's held up consistently. AI is good at defined tasks, bad at knowing when the task definition itself is wrong.
The other is deep domain knowledge... knowing what good output looks like in your field is something models can't fake convincingly at the edges.
Thank you, someone pointed this out. Using Claude has become a pain in my ass now. I used to use it for writing, but it has lost its creative thinking. On top of that, the free plan's daily limit gets exhausted after just two prompts... makes me wonder what happens to these platforms after all the hype.
Most of these tools aren’t defensible because of AI, they’re defensible because of workflow capture. The model is the easy part now, the hard part is owning the context, the data, and the place where work actually happens.
That’s why a lot of them feel the same on the surface: same models, slightly different UI. But the ones that stick quietly wedge themselves into a real job and make it 5x faster or cheaper.
Building Gordon (https://trygordon.ai/) because most companies “have security” but no idea what their actual risk is.
We’re building a platform that helps you pass audits faster, lower cyber insurance premiums, and makes cyber less of a fire drill.
It’s one place to see risk, catch threats, test what breaks, track vendors, train people, and not get hacked at 2 AM.
My perspective is different here. You were right about the VR thing, but the AI tools we rely on today will no longer be useful two years from now. I'd call it the "graveyard of AI startups". The gap is more about timing: every AI tool being built today tries to pack in the capabilities of 3-4 tools combined.
The graveyard is very large and it will keep expanding.