Current AI coding assistants are not just a Stack Overflow replacement; they do a pretty good job of writing whole projects without you touching the code. But when the coding guidelines are not clearly defined, the AI-generated codebase ends up messy and unmaintainable. That is why it has nowadays become profitable for my team to fix "vibe-coded" products that work fine but have a shit ton of problems with scaling and implementation logic.
Security researchers have disclosed 30+ CVEs affecting Claude Code, Cursor, GitHub Copilot, and others, exploitable via prompt injection and MCP tool poisoning; this article covers the attack vectors, the OWASP agentic AI Top 10, and practical defences.
Living in SoCal, I almost always prefer to order online. Most local businesses are losing to their e-commerce competitors; no wonder commercial spaces are empty.
I run a small e-commerce shop on the side. I would consider a physical space just for the sake of luxury, but these days I would rather spend that monthly rent on online marketing than on a storefront.
IMHO, that's what is happening. Bank problems or anything else are secondary; if it were profitable for businesses to be in a physical location, the other factors would vanish.
The article applies to all kinds of property loans, though.
Apartment complexes could also be 50% vacant and still "worth" their original value if the asking rents remain high.
Office buildings that got cleared out after COVID, same thing.
Brick-and-mortar retail is the same.
The article is more a criticism of how asset values are calculated and how loans are managed to avoid foreclosure. That results in financially "valid" buildings and loans that sit underutilized, because the alternative, letting the market reach equilibrium, would come at the cost of lenders and debt holders.
Given SQLite's stability track record, I was more curious about how it does its anomaly testing. Sadly, the article devotes only a few words to it.
Truly one of the best software products! It runs on practically every device, and it is just rock-solid.
I would say the AI consumption aspect was a side effect: the primary goal was to "generate" new stuff. So far, the significant boost for me has been coding. For the rest of the people, though, I think you are right: 90% of the benefit comes from having an interactive, conversational search on top of whatever information the AI can read/consume.
Firefox has been lagging in web features for a long time. I was a Zen browser user for about a year, and recently moved back to Arc because almost all interactive websites look bad on the Firefox engine; somehow it doesn't have the same level of JS API support as Chrome, especially for WebRTC, audio, or video. And it is frustrating that they think the problem is ad blockers!
IMHO, this is not too bad! But obviously, anyone coming from the software product industry knows that building features isn't the same as operating in practice and optimizing for the actual use case, which takes a ton of time.
Waymo has a huge head start, and it is evident that the "fully autonomous" robotaxi date is much further out than what Elon says publicly. They will get there, but it is not as close as the hype suggests.
Thanks for the context; I didn't realize the supervisor sits in the passenger seat in Austin. They do have a kill switch / emergency brake, though:
> For months, Tesla’s robotaxis in Austin and San Francisco have included safety monitors with access to a kill switch in case of emergency — a fallback that Waymo currently doesn’t need for its commercial robotaxi service. The safety monitor sits in the passenger seat in Austin and in the driver seat in San Francisco
Waymo absolutely has a remote kill switch and remote human monitors. If anything, Tesla is being the responsible party here by having a real human in the car.
TBH, the idea seems outdated for the current state of software engineering. The Rust compiler provides a massive benefit for AI coding because it catches entire classes of failure at compile time, so all the AI has to do is implement the logical parts, which is usually a no-brainer for something like Claude Code or Codex.
For example, https://github.com/SaynaAI/sayna has been mostly Claude Code, plus me reviewing the output, plus some small manual touches when needed, and for the most part I have found that Claude Code writes far more stable Rust code than JS.
It would be easier and safer to give the JS code to a translator, have it translated into Rust, and then continue AI development in Rust than to invest time in an automated JS-to-Rust compiler. IMHO!
I’ve heard it said, and I won’t argue with your personal experience.
However, I don’t see it that way at all.
I find Claude much more capable of writing large chunks of Python or React/JS frontend code than writing F#, a very statically type-checked language.
It’s fine, but a lot more hand-holding is needed, a lot more tar pits visited.
If anything, it seems to be a popularity contest: whichever language features most in the training data wins. If AI assistance is the goal, everyone should write Python and JavaScript.
I’ve worked with relatively large projects in TypeScript, Python, C#, and Swift, and I’ve come to believe that the more opinionated the language and framework, the better. C# .NET, despite being a monster, was a breath of fresh air after TS. Each iteration just worked. Each new feature simply got implemented.
My experience also points to compiled languages that give immediate feedback on build. It’s nearly impossible to stop any AI agent from using 'as any' or 'as unknown as X' casts in TypeScript; LLMs will “fix” problems by sweeping them under the rug. The larger the codebase, the more review and supervision are required. A TS codebase rots much faster than Rust, C#, Swift, etc.
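To make that concrete, here is a made-up sketch (the `parseUser` / `User` names are hypothetical, not from any real codebase) of the kind of "fix" an agent reaches for, next to what the cast papers over:

    interface User { id: number; name: string }

    // The LLM-style "fix": the double cast silences the compiler
    // without validating anything, so bad payloads blow up later.
    function parseUser(raw: string): User {
      return JSON.parse(raw) as unknown as User; // type-checks, verifies nothing
    }

    // What it papers over: actually narrowing the unknown value
    // before trusting it (compiles as written on TS 4.9+).
    function parseUserSafe(raw: string): User {
      const data: unknown = JSON.parse(raw);
      if (
        typeof data === "object" && data !== null &&
        "id" in data && typeof data.id === "number" &&
        "name" in data && typeof data.name === "string"
      ) {
        return { id: data.id, name: data.name };
      }
      throw new Error("invalid User payload");
    }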
You can fix a lot of that with a strict tsconfig, Biome and a handful of claude.md rules, I’ve found. That said, it’s been ages since I wrote a line of C#, but it remains the most productive language I’ve used. My TypeScript productivity has only recently begun to approach it.
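Roughly what I mean, as a sketch; these are real tsconfig options and a real Biome rule, but the exact selection is just my taste, and note that nothing here catches 'as unknown as X', which still needs review or a custom rule:

    // tsconfig.json (supports comments natively)
    {
      "compilerOptions": {
        "strict": true,                    // noImplicitAny, strictNullChecks, etc.
        "noUncheckedIndexedAccess": true,  // indexing yields T | undefined
        "noImplicitOverride": true,
        "noFallthroughCasesInSwitch": true
      }
    }

    // biome.jsonc
    {
      "linter": {
        "rules": {
          "suspicious": { "noExplicitAny": "error" } // flags `any`, including `as any`
        }
      }
    }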