Cost is a parameter subject to engineering tradeoffs, just like performance, feature sets, and implementation time.
Security and reliability are also parameters that exist on a sliding scale; the industry has simply chosen to slide the "cost" parameter all the way to one end of the spectrum. As a result, the number of bugs and hacks observed is far enough from the desired value of zero that it's clear the true requirements for those parameters cannot honestly be said to be zero.
> the number of bugs and hacks observed is far enough from the desired value of zero
Zero is not the desired number, particularly not when discussing "hacks". This may not matter in the current situation, but there's a lot of "security maximalism" in industry conversations today, and people seem not to realize that dragging the "security" slider all the way to the right means not just costs becoming practically infinite, but also the functionality and utility of the product falling to zero.
I know a lot of security researchers will disagree with this notion, but I personally think that security (& privacy; I'm going to refer to both as "security" for brevity here) is an overhead.
I think that's why it needs to exist *and be discussed* as a sliding scale. I do find a lot of people in this space chase some ideal without consideration for practicality.
Mind, I'm not talking about financial overhead for the company/developer(s), but rather a UX overhead for the user. It often increases friction and might even require education/training to make use of the software it's attached to.
Much like body armor increases the weight one has to carry and decreases mobility, security has (conceptually) very similar tradeoffs (cognitive instead of physical overhead, and time/interactions/hoops instead of mobility). Likewise, sometimes one might pick a lighter Kevlar suit, whereas other times a ceramic plate is appropriate.
Now, body armor is still a very good idea if you're expecting to be engaged in a fight, but I think we can all agree that not everyone on the street in, say, a random village in Austria, needs to wear ceramic plates all the time.
The analogy does have its limits, of course ... for example, one issue with security (which firmly slides it towards erring on the safe side) as compared to warfare is that you generally know if someone shot at you and body armor saved you; with security (and, again, privacy), you often won't even know you needed it even if it helped you. And both share the trait that if you needed it and didn't have it, it's often too late.
Nevertheless, whether worth it or not (and to be clear, I think it's very worth it), I think it's important that people don't forget that this is not free. There's no free lunch --- security & privacy are no exception.
Ultimately, you can have a super-secure system with an explicit trust system that will be too much for most people to use daily; or something simpler (e.g. Signal) that sacrifices a few guarantees to make it easier to use ... but the lower barrier to entry ensures more people have at least a baseline of security & privacy in their chats.
Both have value and both should exist, but we shouldn't pretend the latter is worthless because there are more secure systems out there.
You're not wrong. The entire cybersecurity industry is a joke. You cannot play defense forever. You can only dig a bunker so deep, walls so thick, and areas so compartmentalized, before you have dug your own tomb and suffocated inside of it, or opened up a backdoor by tunneling through to the other side of the planet.
It's called cyberwarfare for a reason. After mounting a defense, you're expected to stage an offense to proactively eliminate threats and deter future intrusions by fear of response. The only countries that understand this are Israel, Iran, Russia and China. The rest of us are content to burden ourselves with enduring global cyber-siege forever and acting like we're anything other than livestock building our own pens around us.
Today a bank really sent me a legitimate email about trying their new site. Went over, it was their site alright, logged in with correct username and password - poof, instantly blocked for suspicious access (from my usual home machine), call helpline to fix.
Reminds me of repl.it, which perma-blocked my newly created account before I even had a chance to type in the e-mail verification code; in fact, the notice about the account block arrived before the one-time e-mail verification code did.
I still wonder what I did wrong (support isn't responsive). But it's true that we're both safe from having a user/vendor relationship now.
This happens to me when I inherit phone numbers previously flagged for fraud, then try to sign up for new services. My email is clean (I'm not a fraudster) but they ban me on sight based on association with a bad number since I never bother with the transfer process.
I always have to deal with this for the first year after changing carriers.
The thing with zero bugs is that software is complicated not because it's harder than hardware, but simply because if you get some devs, POs, sysadmins, devops people and so on in a room, each will define "zero bugs" entirely differently.
For example, in theory the only system with zero bugs would be one you use in exactly the same way, in the same place, for the same exact goal, and never change.
It's a bit related to the old saying in cybersec: "the most secure system is the one that isn't used at all and not connected to anything". So it's basically always a tradeoff with UX. But who would want that?
I think that's why software on actual mission-critical systems is far more stable and bug-free... I still hate that word, because bugs can't be avoided entirely: sometimes they're just situations where uncontrolled actors (users, other services) use the system in an unintended way, so you plan for that with retry mechanisms, logging, backups, etc.
Because when we think about it further, have you ever witnessed a system in real life that's bug-free? There are bugs all around humans: buildings, cars, even nature.
So how could you expect us to do that, especially at every random company? And do we even want it? Say there were a 100% defined system we could make perfect in 100 years... good, but what's the point?
That's very true, and something I think about often, because it shows up even in everyday tasks. Say the ticket is "We need a new feature to download reports": depending on how much that feature is used, desired, invested in, or marketed, I still face how flexible to make it, what all the error cases are, what the data is, how secure it needs to be, and a million other small or bigger decisions.
The point isn't how user stories are written, but realizing that everything is a compromise for practicality, since if I wanted it to be the most secure thing ever, I'd say "let's just not offer that feature at all".
But back to cars: I like that some companies have promised, or started, to build cars with real buttons again. In our world, the equivalent is that I try to use and support the tools that aren't built on Electron (and slow as hell) but actually care about performance. Open source and decent enough security are already the minimum requirement.
The question was not if it was possible within price boundary X, but if it was possible at all.
There is a difference; please don't conflate possibility with feasibility.
Is having problematic features that cause problems also a requirement?
The answer to the above question will reveal whether someone is an engineer or an electrician/plumber/code monkey.
In virtually every other engineering discipline engineers have a very prominent seat at the table, and the opposite is only true in very corrupt situations.
Also people keep insisting on using unsafe languages like C.
It depends on exactly what you are doing, but there are many languages which are efficient to develop in, if less efficient to execute, like Java, JavaScript, and Python, which are better in many respects; and other languages which are less efficient to develop in but more efficient to run, like Rust. So at the very least it is a trilemma and not a dilemma.
C is about the safest language you can choose, between cbmc, frama-c and coccinelle there is hardly another language with comparable tooling for writing actually safe software, that you can actually securely run on single-core hardened systems. I would be really interested to hear the alternatives, though!
JVM is fast for certain use cases but not for all use cases. It loads slowly, takes a while to warm up, generally needs a lot of memory and the runtime is large and idiosyncratic. You don't see lots of shared libraries, terminal applications or embedded programs written in Java, even though they are all technically possible.
The JVM has been extremely fast for a long long time now. Even Javascript is really fast, and if you really need performance there’s also others in the same performance class like C#, Rust, Go.
Hot take, but: Performance hasn’t been a major factor in choosing C or C++ for almost two decades now.
I think it is the perception of performance rather than the actual performance, and also that C/C++ encroaches on "close to the metal" assembly for many applications. (E.g., when I think about how much C moves the stack pointer around meaninglessly in my AVR-8 programs it drives me nuts, but AVR-8 has a hard limit and C programs are portable to the much faster ESP32 and ARM.)
A while back, when my son was playing chess, I wrote a chess engine in Python and then tried to make a better one in Java which could respect time control. It was not hard to make the main search routine work without allocating memory, but when I tried to do transposition tables with Java objects, it made the engine slower, not faster. I could have implemented them with off-heap memory, but around that time my son switched from chess to guitar, so I started thinking about audio processing instead.
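The allocation problem described above is why engine transposition tables are usually built on flat primitive arrays rather than per-entry objects: probing and storing then never touch the Java heap allocator or create GC pressure. Here is a minimal sketch of that style; the class name, packing layout, and replace-always policy are my own illustrative choices, not from any particular engine.

```java
// Transposition table backed by primitive long arrays: no per-entry
// objects, so store() and probe() never allocate.
// Illustrative packing: score in the top 32 bits, depth in bits 2-9,
// a 2-bit bound flag in bits 0-1; flag 0 is reserved to mean "empty".
public final class TranspositionTable {
    public static final int FLAG_EXACT = 1, FLAG_LOWER = 2, FLAG_UPPER = 3;

    private final long[] keys; // full Zobrist keys, to detect index collisions
    private final long[] data; // packed entries
    private final int mask;

    public TranspositionTable(int sizePow2) {
        keys = new long[1 << sizePow2];
        data = new long[1 << sizePow2];
        mask = (1 << sizePow2) - 1;
    }

    // Replace-always scheme: the newest entry wins on a slot collision.
    public void store(long key, int score, int depth, int flag) {
        int idx = (int) (key & mask);
        keys[idx] = key;
        data[idx] = ((long) score << 32) | ((depth & 0xFFL) << 2) | (flag & 0x3L);
    }

    // Returns the packed entry, or 0 (flag 0 = empty) on a miss.
    public long probe(long key) {
        int idx = (int) (key & mask);
        return keys[idx] == key ? data[idx] : 0L;
    }

    public static int score(long entry) { return (int) (entry >>> 32); }
    public static int depth(long entry) { return (int) ((entry >>> 2) & 0xFF); }
    public static int flag(long entry)  { return (int) (entry & 0x3); }

    public static void main(String[] args) {
        TranspositionTable tt = new TranspositionTable(16); // 65,536 slots
        tt.store(0xDEADBEEFL, -250, 7, FLAG_EXACT);
        long e = tt.probe(0xDEADBEEFL);
        if (score(e) != -250 || depth(e) != 7 || flag(e) != FLAG_EXACT)
            throw new AssertionError("round-trip failed");
        if (tt.probe(12345L) != 0L)
            throw new AssertionError("expected a miss");
        System.out.println("ok");
    }
}
```

The object-based version of this (a `HashMap<Long, Entry>`) boxes every key and allocates an `Entry` per store, which is exactly the slowdown described above; the flat-array version trades that for a fixed memory footprint and a simple replacement policy.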
The Rust vs. Java comparison is also pointed. I was excited about Rust the same way I was excited about Cyclone when it came out, but seeing people struggle with async is painful to watch, and it makes it look like the whole idea doesn't really work once you get away from what you can do with stack allocation. People think they can't live with Java's GC pauses.
The language plays a role, but I think the best example of software with very few bugs is something like qmail and that's written in C. qmail did have bugs, but impressively few.
Writing code that carefully, however, is really not something you just do; it would require a massive improvement in skills overall. The majority of developers simply aren't skilled enough to write something anywhere near the quality of qmail.
Most software also doesn't need to be that good, but then we need to be more careful with deployments. The fact that someone just installs Wordpress (which itself is pretty good in terms of quality) and starts installing plugins from un-trusted developers indicates that many still don't have a security mindset. You really should review the code you deploy, but I understand why many don't.
I was a qmail fanboi back in the day and loved how djb wrote his own string-handling library. I built things with qmail that were much more than an email server (think cgi-bin for web servers) and knew the people who ran the largest email installation in the world (not sure how good they were about opt-in…)
Djb didn't allow forking and repackaging, so qmail did not keep up with an increasingly hostile environment. It got so bad that when the Love Letter virus came out, it was insufficient to add content filtering to qmail and I had to write scripts that blocked senders at the firewall. Security was no longer a 0-and-1 problem; it was certainly possible to patch up and extend qmail to survive in that environment, but there was something to be said for having it all in one nice package… And once the deliverability crisis started, I gave up on running email servers entirely.
qmail was a lot of fun, and so were djbdns and daemontools, but you're right, it failed to keep up, and djb's attitude didn't help.
We built a weird solution where two systems would sync data via email. Upstream would do a dump from an Oracle database, pipe it to us via SMTP, and a hook in qmail would pick up the email, extract the attachment, and update our systems. I remember getting a call one or two years after leaving the organisation; the new systems administrator wanted to know how their database was always kept up to date. It worked brilliantly, but they felt unsafe not knowing how. I really should have documented that part better.
You should check out HLS and DASH. If you're already familiar and you're not using them because they don't meet your requirements, then apologies for the foolish recommendation. If not, this could solve your problem.
They're probably going to do an aerial insertion via helicopter (Ospreys technically), which doesn't require transiting Hormuz. These big amphibious assault ships are built for both maritime and aerial insertions.
You're right, you are using it wrong. An LLM can read code faster than you can, write code faster than you can, and knows more things than you do. By "you" I mean you, me, and anyone with a biological brain.
Where LLMs are behind humans is depth of insight. Doing anything non-trivial requires insight.
The key to effectively using LLMs is to provide the insight yourself, then let the LLM do the grunt work. Kind of like paint by numbers. In your case, I would recommend some combination of defining the API of the library you want yourself manually, thinking through how you would implement it and writing down the broad strokes of the process for the LLM, and collecting reference materials like a format spec, any docs, the code that's creating these packets, and so on.
> An LLM can read code faster than you can, write code faster than you can, and knows more things than you do.
I don't agree. It can't write code at all; it can only copy things it's already seen. But even if that were true, why can't it solve my problem?
> The key to effectively using LLMs is to provide the insight yourself, then let the LLM do the grunt work
Okay, so how do I do that? Remember, I want to do ZERO TYPING. I do not want to type a single character that is not code. I already know what I want the code to do, I just want it typed in.
I just don't think AI can ever solve a problem I have.
You're intentionally missing the point. Every time a bomb drops we're rolling the dice. Hits on civilian targets are inevitable, just like bugs are inevitable. The only solution is not to go to war at all. Don't blame the person who dropped the bomb, blame the people who ordered the bombs to be dropped.
There's a hell of a difference between "we don't like your terms so we're going to use a different supplier" and "we don't like your terms, so we're going to use the power of the federal government to compel you to change them". The president is the commander-in-chief of the military, but Anthropic is not part of the military! Outside serving the public interest in a crisis, the president has no right to compel Anthropic to do anything. We are clearly not in a crisis, much less a crisis that demands kill bots and domestic surveillance. This is clear overreach, and claiming a constitutional justification is mockery.
I'd encourage you to look up the Defense Production Act. Its powers are probably broad enough that the President could unilaterally force Anthropic to do this whether or not it wants to. It's the same logic that would allow him to force an auto manufacturer to produce tanks. And the law doesn't care whether we are in a crisis or not. It's enough that he determine (on his own) that this action is "necessary or appropriate to promote the national defense."
However, it looks like Trump isn't going to go that route: they're just going to add Anthropic to a no-buy list and use a different AI provider.
Ok? And? Trump could use the DPA to force Ford to make tanks in a war, just like how Trump could use the DPA to force Anthropic to make AI in a war. Are we in a war? No. We are not in a crisis.