
Wait. This doesn’t make sense to me. Statically typed programming languages cannot be deployed nor can they run with a type error that happens at runtime. Untyped languages CAN run and error out with a type error AT runtime. The inevitable consequence of that truth is this:

In the spectrum of runtime errors, statically typed languages mathematically and logically HAVE fewer errors. That by itself is the definition of more reliable. This isn't even a scientific question related to falsifiability; it comes from pure mathematical logic. In science nothing can be proven, things can only be falsified. But in math and logic things can be proven, and it is provable that static types are more reliable than untyped ones.

It is definitely not vibes and feels. Not all of banking uses statically typed languages, but those systems are, as a result, less reliable than the alternative, and that is a logical invariant.

There are many reasons why someone would choose untyped over typed, but reliability is not one of them unless they are ignorant.





> Statically typed programming languages cannot be deployed nor can they run with a type error that happens at runtime.

This is so completely untrue that I'm confused as to why anyone would try to claim it. Type confusion is an entire class of error and CVE that happens in statically typed languages. Java type shenanigans are endless if you want some fun, but at baseline you can cast to arbitrary types at runtime and completely bypass all compile-time checks.
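
The thread's example is Java, but a rough TypeScript sketch of the same escape hatch (made-up data, with an `as` assertion standing in for a Java cast) shows the shape of it: the checker is satisfied at compile time and the mismatch only surfaces at runtime.

    // Hypothetical shape and payload, for illustration only.
    interface User { name: string }

    // The key is misspelled, so the parsed object does not actually match User.
    const raw: unknown = JSON.parse('{"nmae": "ada"}');

    // An `as` assertion (playing the role of a Java cast) silences the compile-time check...
    const user = raw as User;

    // ...so the failure only shows up here, at runtime:
    // TypeError: Cannot read properties of undefined (reading 'length')
    console.log(user.name.length);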

I think the disagreement would additionally come from saying that a language like Ruby doesn't actually have any type errors, the way it can be said that GC languages can't have memory leaks, and that this model is stronger than just compile-time checking. Sure, you get a thing called TypeError in Ruby, but because of the language's dynamism that's not an error the way it would be in C. You can just catch it and move on. It doesn't invalidate the program's correctness. Ruby is so safe in its execution model that syntax errors don't invalidate the running program's soundness.
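
For what it's worth, the same point holds in JavaScript land. A small TypeScript sketch (contrived values) of a runtime TypeError being caught and recovered from, rather than invalidating the program:

    function shout(x: unknown): string {
      try {
        // Deliberately unsafe: this throws a TypeError when x has no usable .name.
        return (x as { name: string }).name.toUpperCase();
      } catch (e) {
        if (e instanceof TypeError) {
          // The error is an ordinary exception: catch it and move on.
          return "no name available";
        }
        throw e;
      }
    }

    console.log(shout({ name: "ada" })); // "ADA"
    console.log(shout(null));            // "no name available"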


> Java type shenanigans are endless if you want some fun, but at baseline you can cast to arbitrary types at runtime and completely bypass all compile-time checks.

For this reason Java is a bad example of a typed language. It gives static typing a bad rep because of its inflexible yet unreliable type system (only basic type inference, no ADTs, many things like the presence of equality not checked at compile time, etc.). Something like OCaml or F# has a much more sound and capable type system.


Like other people replying to you have said, C++ and Java gave types a bad rep by being so error prone and having weak type systems.

What I am saying is not untrue. It is definitive. Java just has a broken type system, and it has warped your view. The article is talking more about type systems from functional programming languages, where type errors are literally impossible.

You should check out Elm. It’s one of the few languages (that is not a toy language and is deployed to production) where the type system is so strong that runtime errors are impossible. You cannot crash an Elm program, because the type system doesn’t allow it. If you used it or Haskell for a while in a non-trivial way, it would give you deeper insight into why types matter.
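
One way to get a feel for the mechanism without installing Elm: encode "might be missing" as a data type, so the checker forces every caller to handle the empty case instead of letting it blow up at runtime. A rough TypeScript approximation of what Elm and Haskell call Maybe (toy example, not Elm itself):

    // "Might be missing" as a value the type checker can see.
    type Maybe<T> = { kind: "just"; value: T } | { kind: "nothing" };

    // Unlike a bare xs[0], this can never hand back an unexpected undefined.
    function head<T>(xs: T[]): Maybe<T> {
      return xs.length > 0 ? { kind: "just", value: xs[0] } : { kind: "nothing" };
    }

    function describe(m: Maybe<number>): string {
      if (m.kind === "just") {
        return `first element is ${m.value}`;
      }
      // The checker knows the only remaining case is "nothing"; under strict
      // settings, forgetting this branch is flagged at compile time, not at runtime.
      return "the list was empty";
    }

    console.log(describe(head([1, 2, 3]))); // "first element is 1"
    console.log(describe(head([])));        // "the list was empty"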

> Ruby is so safe in its execution model that syntax errors don't invalidate the running program's soundness.

This isn’t safety. Safety is when the program doesn’t even run or compile with a syntax error. Imagine if programs with syntax errors still made a best-effort attempt to run… now you have a program with unknown behavior, because who knows what the program did with the syntax error? Did it ignore it? Did it try to correct it? Now imagine that Ruby program controlling a plane. That’s not safe.


There are different levels of static typing.

This logic is both too broad and too rigid to be of much practical use[1]. It needs to be tightened to compare languages that are identical except for static type checks; otherwise the statically typed language could admit other kinds of errors (memory errors immediately come to mind) that many dynamic languages do not have, and you would need some way of weighing the relative cost to reliability of the different categories of errors.

Even if the two languages are identical except for the static types, it is clearly possible to write programs that do not have any runtime type errors in the dynamic language (I'll leave it as an exercise to the reader to prove this, but it is very clearly true), so there exist programs in any dynamic language that are equally reliable to their static counterparts.

[1] I also disagree with your definition of reliability but I'm granting it for the sake of discussion.


You’re objecting on the wrong axis.

The claim was about reliability and lack of empirical evidence. Once framed that way, definitions matter. My argument is purely ceteris paribus: take a language, hold everything constant, and add strict static type checking. Once you do that, every other comparison disappears by definition. Same runtime, same semantics, same memory model, same expressiveness. The only remaining difference is the runtime error set.

Static typing rejects at compile time a strict subset of programs that would otherwise run and fail with runtime type errors. That is not an empirical claim; it follows directly from the definition of static typing. This is not hypothetical either. TypeScript vs JavaScript, or Python vs Python with a sound type checker, are real examples of exactly this transformation. The error profile is identical except the typed variant admits fewer runtime failures.
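
To make that transformation concrete, here is a hypothetical snippet: stripped of its annotations it is plain JavaScript that runs and then fails at runtime, while the TypeScript checker refuses the same call before it can ever execute.

    function greet(user: { name: string }): string {
      return "hello, " + user.name.toUpperCase();
    }

    greet({ name: "ada" }); // fine in both JavaScript and TypeScript

    try {
      // @ts-expect-error TS2345: 'number' is not assignable to '{ name: string }'
      greet(42); // plain JavaScript runs this line and only fails here, at runtime
    } catch (e) {
      console.log("runtime failure:", e); // TypeError: reading 'toUpperCase' of undefined
    }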

Pointing out that some dynamic programs have no runtime type errors does not contradict this. It only shows that individual programs can be equally reliable. The asymmetry is at the language level: it is impossible to deploy a program with runtime type errors in a sound statically typed language, while it is always possible in a dynamically typed one. That strictly reduces the space of possible runtime failures.

Redefining “reliability” does not change the result. Suppose reliability is expanded to include readability, maintainability, developer skill, team discipline, or development velocity. Those may matter in general, but they are not variables in this comparison. By construction, everything except typing is held constant. There is literally nothing else left to compare. All non-type-related factors are identical by assumption. What remains is exactly one difference: the presence or absence of runtime type errors. At that point, reliability reduces to failure count not as a philosophical choice, but because there is no other dimension remaining.

Between two otherwise identical systems, the one that can fail in fewer ways at runtime is more reliable. That conclusion is not empirical, sociological, or debatable. It follows directly from the setup.


> Wait. This doesn’t make sense to me. Statically typed programming languages cannot be deployed nor can they run with a type error that happens at runtime. Untyped languages CAN run and error out with a type error AT runtime. The inevitable consequence of that truth is this

There is nothing inevitable about the consequence you’re imagining because statically typed languages also reject correct programs.


It is 100 percent inevitable. Your reasoning here is illogical.

How does a statically typed language rejecting a correct program affect reliability? The two concepts are orthogonal. You’re talking about flexibility of a language but the topic is on reliability.

Let me be clear… as long as a language is Turing complete you can get it to accomplish virtually any task. In a statically typed language you have fewer ways to accomplish the same task than in a dynamically typed language, but both languages can accomplish virtually any task. By this logic a dynamically typed language is categorically more flexible than a static one, but it is also categorically less reliable.


>How does a statically typed language rejecting a correct program affect reliability?

Because in some cases it will reject code that is simple and obviously correct, which will then need to be replaced by code that is less simple and less obviously correct (but which satisfies the type checker). I don't think this happens most of the time, but it does mean that static typing isn't a strict upgrade in terms of reliability. You are paying for the extra guarantees on the code you can write by giving up lots of correct programs that you could otherwise have written.
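
A toy TypeScript illustration of that shape (hypothetical config object, checked under strict/noImplicitAny): version 1 is correct under an invariant the author knows but the checker cannot see, so it is rejected; version 2 is what you write instead to satisfy the checker.

    const config = { host: "localhost", port: "8080" };

    // Version 1: correct as long as callers only ever pass real keys,
    // but that invariant lives in the author's head, not in the types.
    function getLoose(key: string): string {
      // @ts-expect-error TS7053 (with noImplicitAny): a plain string can't index this object type
      return config[key];
    }

    // Version 2: the checker-approved rewrite. Safer, but the caller now has
    // to thread the precise key type through instead of "just a string".
    function getStrict(key: keyof typeof config): string {
      return config[key];
    }

    console.log(getLoose("host"));  // "localhost" -- runs fine despite the rejection
    console.log(getStrict("port")); // "8080"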


>I don't think this happens most of the time, but it does mean that static typing isn't a strict upgrade in terms of reliability.

It is a strict upgrade in reliability. You're arguing for other benefits here, like readability and simplicity. The metric on topic is reliability and NOT other things like simplicity, expressiveness or readability. Additionally, like you said, it doesn't happen "most" of the time, so even IF we included those metrics in the topic of conversation your argument is not practical.

>You are paying for the extra guarantees on the code you can write by giving up lots of correct programs that you could otherwise have written.

Again, the payment is orthogonal to the benefit. The benefit is reliability. The payment is simplicity, flexibility, expressiveness, and readability. For me personally (and for you, as you seem to have indicated), programs actually become more readable and simpler when you add types. Expressiveness and flexibility are actually footguns, but that's not an argument I'm making, as these are more opinions and therefore unprovable. You're free to feel differently.

My argument is that in the totality of possible errors, statically typed programs have provably FEWER errors and thus are definitionally MORE reliable than untyped programs. I am saying that there is ZERO argument here, and that it is mathematical fact. No amount of side-stepping out of the bounds of the metric "reliability" will change that.


Your definition of reliability seems different from how people use the word. I think most would consider a program that was statically checked but often produces a wrong result to be less reliable than a dynamically checked program that produces the right result.

>My argument is that in the totality of possible errors, statically typed programs have provably FEWER errors and thus are definitionally MORE reliable than untyped programs. I am saying that there is ZERO argument here, and that it is mathematical fact. No amount of side-stepping out of the bounds of the metric "reliability" will change that.

Making such broad statements about the real world with 100% confidence should already raise some eyebrows. Even through the lens of math and logic, it is unclear how to interpret your argument. Are you claiming that the sum of all possible errors in all runnable programs in a statically checked language is less than the sum of all possible errors in all runnable programs in an equivalent dynamically checked language? Both of those numbers are infinite; I remember from school that some infinities are greater than others, but I'm not sure how to prove that here. And if such a statement were true, how does it affect programs written in the real world?

Or is your claim that a randomly picked program from the set of all runnable statically checked programs is expected to have fewer errors than a randomly picked program from the set of all runnable dynamically checked programs? Even this statement doesn't seem trivial, due to correct programs being rejected by the type checker.

If your claim is about real-world programs being written, you also have to consider that their distribution among the set of all runnable programs is not random. The amount of time, attention, and other resources is often limited. Consider the act of twisting an already correct program in various ways to satisfy the type checker, and consider the time lost that could have been invested in further verifying the logic. The result will be much less clear-cut, more probabilistic, more situation-dependent, etc.


I think the disagreement here comes from overcomplicating what is actually a very simple claim.

I am not reasoning about infinities, cardinalities of infinite sets, or expectations over randomly sampled programs. None of that is needed. You do not need infinities to see that one set is smaller than another. You only need to show that one set contains everything the other does, plus more.

Forget “all possible programs” and forget randomness entirely. We only need to reason about possible runtime outcomes under identical conditions.

Take a language and hold everything constant except static type checking. Same runtime, same semantics, same memory model, same expressiveness. Now ask a very concrete question: what kinds of failures can occur at runtime?

In the dynamically typed variant, there exist programs that execute and then fail with a runtime type error. In the statically typed variant, those same programs are rejected before execution and therefore never produce that runtime failure. Meanwhile, any program that executes successfully in the statically typed variant also executes successfully in the dynamic one. Nothing new can fail in the static case with respect to type errors.

That is enough. No infinities are involved. No counting is required. If System A allows a category of runtime failure that System B forbids entirely, then the set of possible runtime failure states in B is strictly smaller than in A. This is simple containment logic, not higher math.

The “randomly picked program” framing is a red herring. It turns this into an empirical question about distributions, likelihoods, and developer behavior. But the claim is not about what is likely to happen in practice. It is about what can happen at all, given the language definition. The conclusion follows without measuring anything.

Similarly, arguments about time spent satisfying the type checker or opportunity cost shift the discussion to human workflow. Those may matter for productivity, but they are not properties of the language’s runtime behavior. Once you introduce them, you are no longer evaluating reliability under identical technical conditions.

On the definition of reliability: the specific word is not doing the work here. Once everything except typing is held constant, all other dimensions are equal by assumption. There is literally nothing else left to compare. What remains is exactly one difference: whether a class of runtime failures exists at all. At that point, reliability reduces to failure modes, not by preference or definition games, but because there is no other remaining axis. I mean, everything is the same! What else can you compare if not the type errors? Then ask the question: which one is more reliable? Well… everything is the same except one has runtime type errors while the other doesn’t… which one would you call more “reliable”? The answer is obvious.

So the claim is not that statically typed languages produce correct programs or better engineers. The claim is much narrower and much stronger: holding everything else fixed, static typing removes a class of runtime failures that dynamic typing allows. That statement does not rely on infinities, randomness, or empirical observation. It follows directly from what static typing is.


Readability and simplicity can increase reliability, because simple readable code is easier to review.

From a practical perspective, most programmers will agree with me when I say that static types are more readable.

Sure. That’s a different kind of argument from the absolutist argument you were making earlier.

Right and it’s completely off topic. This is a tangent you decided to turn the conversation toward. Tangents are fine, I’m just saying that you are wrong on both the main topic and the tangent, which is also fine.

The point is that a dynamic language will in some cases enable code that is simpler and more readable (and hence probably more reliable) because sometimes the simplest code is code that wouldn’t type check. Even if statically typed languages are more readable on average, this fact invalidates your claim that statically typed languages are strictly better in terms of reliability. This can only be true if you artificially restrict attention to the subset of programs in the dynamic language that could have been statically typed.

By the way, could you tone down the rhetoric a notch?


My claim is not invalid. It’s just being evaluated against a different question.

The original post says that claims like “static typing improves reliability” are unfalsifiable and therefore just vibes. That’s false, because the claim being made is not empirical to begin with. It’s a statement about language semantics.

Holding everything else constant, static typing eliminates a class of runtime failures by construction. Programs that would fail at runtime with type errors in a dynamic language are rejected before execution in a statically typed one. That is not a hypothesis about the real world, teams, or productivity. It’s a direct consequence of what static typing is. No evidence or falsification is required.

When you argue that dynamic languages can sometimes enable simpler or more readable code that may be more reliable in practice, you’ve changed the claim. That’s a discussion about human factors and development process. It may be true, but it does not invalidate the original claim, because it addresses a different level of analysis.

Additionally in practice it isn’t true. Your entire argument flips context and is trying to point out a niche corner case to show that my overall correct argument is not absolute. You’re already wrong practically, and you’re also wrong absolutely.

So the correct framing is:

- At the language level, static typing is strictly more reliable with respect to runtime type errors.

- At the human/process level, tradeoffs exist, and outcomes can vary.

Calling the first claim “invalid” only works if you silently replace it with the second. That’s the source of the disagreement.


You can't "hold everything else constant" because the set of programs that satisfy whatever type system is a proper subset of the set of valid programs.

That’s a very long and indirect jump to make.

The claim that admitting a larger set of programs improves reliability goes through several speculative steps: that the extra programs are correct, that they are simpler, that simplicity leads to fewer mistakes, and that those mistakes would not have been caught elsewhere. None of that follows from the language semantics. It’s a human-factor argument layered on top of assumptions.

By contrast, static typing removing a class of runtime failures is immediate and unconditional. Programs that would fail at runtime with type errors simply cannot execute. No assumptions about developer skill, code style, review quality, or time pressure are needed.

Even in practice, this is why dynamic languages tend to reintroduce types via linters, contracts, or optional typing systems. The extra expressiveness doesn’t translate into higher reliability; it increases the error surface and then has to be constrained again.

So the expressiveness argument doesn’t invalidate the claim. It changes the topic. One side is a direct property of the language. The other is a speculative, multi-step causal story about human behavior. That’s why the original claim is neither unfalsifiable nor “just vibes.”

So regardless of speculative human factors, the claim stands: holding the language semantics constant, static typing strictly reduces the set of possible runtime failures, and therefore strictly increases reliability in the only direct, non-contingent sense available.


Also, did you read what I wrote? I covered your argument here DIRECTLY in my response. It's like you read the first sentence and then responded while ignoring the next paragraph.


