The road to Dart 3: A fully sound, null safe language (medium.com/dartlang)
108 points by markdog12 on Dec 8, 2022 | 205 comments


"For Dart, we chose the path of sound null safety."

Null Safety is the killer feature of modern programming languages like Swift and Kotlin. It provides a clarity you just can't get otherwise and saves you from runtime errors. And it requires very little additional ceremony to use.


I'd say it is more like a basic requirement than a killer feature at this point.


While I love null safety, it's hardly a given. For Dart in particular, it sounds like there was a serious discussion about the choice, and it could have gone the other way.

(edit: removed incorrect statement about Python's type safety)


> For Dart in particular, it sounds like there was a serious discussion about the choice

Yes, we debated it for years.

Literally the day we launched, a user filed an issue requesting support for null safety: https://github.com/dart-lang/sdk/issues/22

For most of Dart's history, that was the #1 upvoted issue on the issue tracker.

Back in 2011, before I worked directly on the language, I proposed null safety:

http://journal.stuffwithstuff.com/2011/10/29/a-proposal-for-...

I'm immensely glad we finally did it, even though the migration has been a ton of work.



FWIW my (1 year old) experience with mypy was very poor. Stuff breaking between releases, wrong resolution, bugs. It kinda felt like a 3rd party fun project, not an official high quality solution.

The fact that the source can contain completely wrong declarations and Python will happily run it anyway felt really bad. It's kinda like JavaDoc rather than a classic type system.


Languages like Java, Go and C were all designed before it became obvious that `Option<>` or `?` was the right thing to do.

You can live with it, but most languages eventually add some way to deal with it. Java has @NonNull. JavaScript and Python both have static type annotations that are non-nullable. Even C++ has some attempt at it (std::optional<>).

The only one I know of that hasn't bothered trying is Go.


C, yes; but Standard ML was already a thing when Java and Go came to be, and so were plenty of other ML-derived languages.


Go absolutely wasn't. It has been obvious since ML and Haskell.


But only to ML and Haskell programmers. Now it is obvious to basically everyone. (Excluding people who just haven't discovered static typing at all yet.)


The designers of Go should have known this. It was designed 20-30 years after ML/Haskell.


But the absence of nulls is the feature, not the presence of Option.


Zero-values and pointers-as-options is not even half as good as Option<T>. At best, it was easy to implement, and convenient to use, until you actually had to run the program.
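To illustrate the difference in TypeScript terms (a sketch, not Go): with an Option-style type the absent case lives in the type itself, so the compiler forces a check before use, rather than letting a zero value silently flow onward.

```typescript
// Map.get returns number | undefined: the "none" case is part of the
// type, so under strictNullChecks the compiler rejects using the result
// as a number until the undefined case is handled.
const ages = new Map<string, number>([["alice", 30]]);

function ageNextYear(name: string): number | undefined {
  const age = ages.get(name);
  if (age === undefined) return undefined; // explicit, compiler-checked
  return age + 1; // age is narrowed to number here
}
```

A Go-style zero value (0 here) would instead compile fine and quietly produce a wrong answer.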


I feel the same. My problem is that my teammates (future and past) will continue using null deliberately. Makes it a bit hard to eliminate.


Go doesn't have null-safety, so it seems more like a nice-to-have based on Go's success.


I dunno, we had a terrible experience with Go nils at my current job:

- It's super easy to create a situation where some obscure code fragment can cause a panic and crash the whole app due to nil pointer access, especially in multithreaded scenarios

- We had to start using linters to track possible nil errors early on

- Eventually we just moved to Kotlin, where this is not an issue

- And don't get me started on nil interfaces, good luck properly checking nilability and tracking these kind of issues


Another fun one is that methods may or may not receive nils depending on whether they're defined with a value or a pointer receiver.


I mean all of those things are downsides to having unsafe nullability. However, my point was that those things obviously aren't deal breakers for many, many people as Go is still extremely popular.


Please don't make new accounts to reply to a single comment.


I feel null isn't the real problem; the problem is the lack of semantics attached to null.

NotNull, or so-called "null safety", means that when you know something is not optional, if a null is passed to it, you know it's a bug.

But when you do have null again, by making something nullable, the million-dollar problem is back: you don't know why it's null.

Is it null because it's missing, false, errored, EOF, a leaf node, etc.? You don't know.

In my opinion what we need is union types, and to simply never use null for anything; remove it completely.

Instead you could declare a function returns a Value or Missing. Or it would return a Value or False. Or it would return a Value or EndOfFile.

And similarly you could say:

    User {
     String|NotProvided email;
    }
Where email on user can be either a String or NotProvided.

You wouldn't want NotProvided, EndOfFile, and all that to be full-on classes though, just type aliases of some sort, similar to null in a way, but adding semantics.

    if (email == NotProvided) {
      ...
    }
And the language compiler would error if you try to use it where it doesn't work without checking:

    String email = user.email; // Error

    if (user.email != NotProvided) 
    {
      String email = user.email
    } // No error
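For what it's worth, TypeScript's union types can already express something close to this; a sketch where `NotProvided` is a hypothetical sentinel carrying the semantics a bare null lacks:

```typescript
// A unique symbol stands in for the proposed NotProvided "type alias".
const NotProvided = Symbol("NotProvided");

interface User {
  email: string | typeof NotProvided;
}

function emailDomain(user: User): string | typeof NotProvided {
  if (user.email === NotProvided) {
    return NotProvided; // the caller knows *why* there is no value
  }
  // After the check, the compiler narrows user.email to string.
  return user.email.split("@")[1];
}
```

Using `user.email.split(...)` without the check is a compile error, exactly the behavior asked for above.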


That's exactly why I love Rust. Of course, if you don't want to write good Rust or are lazy, you can "reimplement" null with the Option enum and unwrap() it every time, so your code panics every time, or do the same by returning errors without thinking about whether your code should move on. You still have to think about it, but at least you're not doing null checks AND Option.NONE checks, which is extremely tedious in Java, for instance.

So, for a quick script or rapid prototyping, just unwrap/panic if you don't care about the result, like in a short code contest; for any other use, refrain from using these, match all possibilities, and implement strongly typed error management.


Typescript does pretty much exactly what you want. Also try Rust. It's close to what you want.


basically that's how it works in Kotlin (and Dart and others) already. String? is String|null and it's a compiler error to reference a member without a prior null check.

In Kotlin (and others) it's extremely ergonomic, too. You just say foo?.bar and coalesce with ?: so

val baz = foo?.bar ?: "foo or bar is null"

In certain cases the Kotlin compiler will infer non-nullable type after an explicit null check just like in your example such that

fun bestFunction(foo: String?) {

    if (foo == null) return
    foo.baz // now legal to call without ?.

}
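TypeScript gets very similar ergonomics with optional chaining and nullish coalescing (the names below are made up for illustration):

```typescript
interface Foo { bar?: string }

// `?.` is the safe call; `??` plays the role of Kotlin's `?:` elvis operator.
function describe(foo: Foo | null): string {
  return foo?.bar ?? "foo or bar is null";
}

// Flow-based narrowing after an explicit null check, as in the Kotlin example:
function bestFunction(foo: string | null): number {
  if (foo === null) return 0;
  return foo.length; // foo is now narrowed to string
}
```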


Not really. The GP is complaining about the lack of semantics around what a missing value means. `String?` still doesn't tell me what not having a string means. `String|NotProvided` or `String|NotFound` or `String|InvalidInput` all have the same functionality as `String|null` but tell me something about what the missing value is indicating.


I mean, generally I would love (more) union types, unfortunately so few languages have them.

For the OP's requirement, I think it's too specific for a language-level feature. In Kotlin you'd just use a sealed type in the cases where you really need to know why a function can't produce a value. Works like a charm.
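In TypeScript terms (illustrative names), the sealed-type approach corresponds to a discriminated union with an exhaustive switch, which is how you'd recover the "why is it missing" information:

```typescript
// Each variant says *why* a value may be absent, unlike a bare null.
type Lookup =
  | { kind: "found"; value: string }
  | { kind: "notProvided" }
  | { kind: "notFound" };

function render(r: Lookup): string {
  switch (r.kind) {
    case "found":
      return r.value;
    case "notProvided":
      return "(no email given)";
    case "notFound":
      return "(user unknown)";
    // No default needed: with a declared return type, the compiler
    // flags this switch if a new variant goes unhandled.
  }
}
```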


They are so modern, catching up with ML in 1976....


I don’t know, it is quite trivial to statically analyze nulls, so even with basic IDE analysis I don’t even remember the last time I got an NPE in Java. To put such a basic thing as a “killer feature” doesn’t make for a good advertisement :D


I mean... the type checker is a static analyzer. Why not bake the most commonly seen type error into the language's type checker?

For all of its flaws, at least C++ made null references undefined behavior. Coming from (pre-NotNull) Java, it's nice to avoid lots of defensive null-check boilerplate.


Static analysis can only go so far. IntelliJ has been helpful, but far from sufficient. One of the fundamental problems is also that static analysis won't help you differentiate between intent (the method should not return null) and a mistake (it returns null anyway).


@NotNull/@Nullable is not part of the standard library, but is a de facto standard nonetheless. I found that defaulting to everything non-nullable (there is a setting in these tools for whether the default is implicit nullable or non-nullable) and explicitly annotating the few places where you want to use null gives you effectively the same null safety as Kotlin (note that the standard library is annotated by these tools).


> @NotNull/@Nullable is not part of the standard lib, but is a de facto standard nonetheless.

Which one is the standard, the dormant JSR-305, the IntelliJ annotations or the checker framework (I guess that's the best candidate since it was actually accepted)?

> I found that defaulting to everything non-nullable (there is a setting in these tools which should be the default, implicit nullable or non-nullable)

Does this have a good cross-IDE support?

> gives you effectively the same null safety as Kotlin

"effectively", just ergonomically way worse.

But the main issue I have with this is that it requires you to actively seek null safety, pick a particular "standard" and push it through, ruthlessly enforce it during a code review. Which mostly doesn't happen. I've never seen a large codebase (which I would work on) which would use such annotations consistently. Yet unsurprisingly, all Kotlin code bases I've seen were in fact null safe.


> Which one is the standard, the dormant JSR-305, the IntelliJ annotations or the checker framework (I guess that's the best candidate since it was actually accepted)?

Hence “de facto”. All of these tools understand all of them.

And I do have to agree with your last paragraph, it is not as good as native support, but nor is it as big of a pain point as it used to be/as some people think it still is.


> but nor is it as big of a pain point as it used to be/as some people think it still is.

It's a big enough pain point that I resigned myself on null-safety in existing Java projects.

I rarely have the opportunity to start projects from scratch, and for those I'd rather use Kotlin.


> I found that defaulting to everything non-nullable (there is a setting in these tools which should be the default, implicit nullable or non-nullable) and providing the few places where you want to use null explicitly gives you effectively the same null safety as Kotlin

Not really, I'd say not even close. In Java you would be trying to avoid nulls whenever possible, but in Kotlin nulls elegantly become a flow control feature.


Yes, but the ergonomics of working with nullable data in Java are generally bad, and you have to do it everywhere


Well, try to minimize the usage of nulls in your programs then :D

For mappings between objects’ deeply nested properties I found mapstruct to be the best tool which will handle it correctly either way.


> Well, try to minimize the usage of nulls in your programs then :D

I am a Java guy since 1.1, and it seems to me that a lot of problems in Java come down to it being designed for a time when an application mainly consisted of in-house / in-org / self-written code. In $currentYear, however, we mostly plug libraries into frameworks, and it's just a pain to check every call and every return value for the possibility of null. Yeah, it can be done, but I just don't like having to read library source code to find out if it will return a null in some cases.


Open-source libraries were much less common when Java first arrived, but third-party C / Fortran libraries were very common in many domains much before Java arrived.

Nulls, arrays degenerating into pointers (losing size information), and accidentally crossing enum types across libraries were all huge sources of bugs. Java doesn't have much of an excuse for its treatment of nullability, other than they were trying to court C/C++ developers.


I don't have 100% control of my coworkers or library writers. Knowing whether a library or a coworker can hand me back an optional value seems like a pretty basic thing for a language to tell you


I'm surprised that no one is commenting on macros. For me, that is the most exciting feature! It'll make writing code so much more enjoyable and avoid all of the code generation happening at the moment.

But maybe I'm the only one who's bothered with code generation :D


You're not the only one. They actually started work on macros before pattern/records, but for some reason it seems priority changed. I would have much preferred macros came first.


Update: One of the Dart devs says it's because macros are higher risk and harder to get right.


I'd love to see Dart break out of just being the language for Flutter. I built an app in Flutter and Dart was quite fun to learn and work with. Will it ever break out of its shell?


for me personally, it already has.

i have a large set of scripts i use to automate the boring parts of my job. i originally wrote them in bash. past a certain amount of complexity, that became unworkable, so i ported it all to python. after about a year, i found python to be too limiting as well, based on my own coding style, so i am porting it all again, this time to dart.

the dart compiler makes it a cinch to create command-line apps that work on basically anything with a terminal.


Interesting, what made Dart less limiting than Python?


oh man, so many things. i'll choose the most annoying for me personally. let's say i am creating a new dart source file in a dart package, something i have to do quite often. the import section will look like this:

  import "package:crystal/crystal.dart";
super easy. i can add it from memory, or copy an existing module in this package, which will already have that line at the top. it is extremely rare that i ever need to modify this, or add a second line to the import section.

meanwhile, let's say i am adding a new python source file to a package. the import section will end up looking like this:

  import os, base, dart, execute
  from typing import List, Optional
  from .build import build
  from .build_args import BuildArgs
  from .build_type import BuildType, buildTypeList, buildTypeParse
  from .build_type import buildTypeStrings, buildTypeValid
... and pretty much every change to this source file will require modifying the imports section, in ways that i definitely have not memorized, even after years of python programming.

trying to write a large program in python was, for me, death by a thousand paper cuts like this, which don't occur in dart.


Whatever one wants to say about Dart or Flutter, it sounds like the tooling and dev experience compares quite favorably to the more established stacks.


Yup, Dart itself is an ok language (and has improved dramatically in the time I've used it), but the tooling around Dart and Flutter are what make me stay. It's all fairly standard, but complete, and it basically always just works, and the handful of times it doesn't, it's when I've known I'm doing something odd.

The community has lots of great, well built packages, and the Dart and Flutter teams are also quite responsive to the community, and helpful when an issue does arise. I've spent several threads going back and forth with maintainers trying to figure out a bug, and I'm happy to say that every single time we've found a solution.


Have you tried PowerShell? I use pwsh (cross-platform PowerShell) for CLI work. It gives good IntelliSense, and PowerShell being object-based makes everything better.


one of my (many!) objections to using python for automation tasks was what a gigantic pain in the ass it is to transfer a codebase from one computer to another. i once helped a coworker install my python scripts, which took the better part of a day. given that i can now compile my automation app as a single binary, i am in no hurry to add yet another external dependency. and i tend to avoid anything from microsoft whenever possible. i remember when they were doing their dead level best to destroy tech innovation in the eighties and nineties, and i still don’t trust them.


Considering it just started to get any traction by being the language of Flutter, it'll probably take some time. Remember Dart originally came out around the same time as Go. And people are just starting to actually use it for things in production.


This won't be a kind comment about the project, but I honestly see no reason for yet another language that, frankly, brings nothing new to the table. New languages should either be research ones working on something truly novel, or plug into an existing ecosystem, because there is just zero point in reinventing everything every time.

I really feel that the way forward is something like Graal running multiple languages in the “one VM to rule them all”.


I love the idea of sound null checks, but doesn't Dart also have unchecked exceptions? To me it seems a bit weird to focus so heavily on type soundness when at any moment a piece of code could throw an unexpected exception that kills your program. I realize these are very different features, but a language is more than the sum of its parts. While I realize every language has some level of unchecked exceptions out of necessity (ie. panics on divide by zero, etc.), they are not meant as general level error checking, but as more of a "can't continue - things are that bad" kind of mechanism.

In summary, if they are that convinced sound null types are needed, to me it needs better error handling that matches the same level of discipline. That said, any and all improvements are always welcome, so I do think overall it is cool they are taking this seriously.


TypeScript has unchecked exceptions too, but null-safety is my #1 motivator for using TypeScript over JavaScript

You're speaking from a place of general philosophy, but in practice (at least in my experience), the great majority of unchecked exceptions come from null references. In the TypeScript codebases I've worked on, the only runtime exceptions we really have to think about come from network requests and (unfortunately) JSON.parse(). You quickly learn to always handle those two cases, and with that we see basically zero unhandled errors in production

Of course we're talking about Dart and I don't know how commonly Dart throws unhandled exceptions or where they might come from, but I think it's a dramatic over-simplification to suggest it's not even worth trying to reduce errors just because unhandled exceptions are technically allowed at the language level
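For instance, the JSON.parse case is commonly tamed with a small Result-style wrapper (a sketch; the shape of the result type here is my own):

```typescript
type ParseResult =
  | { ok: true; value: unknown }
  | { ok: false; error: string };

// JSON.parse throws an unchecked exception on malformed input; wrapping
// it turns that into a value the type checker forces callers to handle.
function safeParse(text: string): ParseResult {
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (e) {
    return { ok: false, error: e instanceof Error ? e.message : String(e) };
  }
}
```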


Are there any popular languages which don't have unchecked exceptions? Java, for example, supports both checked and unchecked exceptions. In practice, checked exceptions seem to cause more problems than they solve for large scale software development. If you want to inherit from a class or interface it becomes impossible to add new checked exceptions to method signatures. So developers have to resort to hacks like wrapping the new checked exception in an unchecked exception.


This is a pet peeve of mine: checked exceptions map one-to-one to Result types, and in my opinion the former is strictly better. It automatically unwraps the ok result on successful execution, while making the error handling as tightly scoped as one wishes (a try block, or leaving it out and auto-bubbling up) with native language support. Plus it includes proper stack traces, which are a must in production systems (as much as some people hate them).

Now don't get me wrong, Java's implementation leaves much to be desired (inheritance is not the correct choice for denoting it), but I feel we really never gave the idea a proper chance.


Neither Rust nor Go has exceptions at all, and I would consider them "popular" at this point. To be fair, they each also make you think more about your error handling (there are multiple valid approaches in each language).


Go and Rust each do have exceptions - in both languages they are called "panics". Both can (usually) be intercepted and recovered.


Go’s error handling is as good as C’s. And that is not a praise.


Rust does have a feature called "panics" that is similar to exceptions in other languages. A panic in Rust is an unrecoverable error that can occur at runtime. Unlike exceptions, however, panics are not caught automatically by the language runtime. Instead, panics are propagated up the call stack until they reach a "catch point", where they can be handled by the programmer.


1. An exception is a recoverable error, that's the entire point, that's what catching an exception is.

2. Unlike Go's, rust's panics do not actually, universally, get "propagated up the call stack until they reach a "catch point", where they can be handled by the programmer". There's a compiler flag which can be "unwind" or "abort". In the former case (the default), panics can be caught and recovered from. In the latter case, the program gets hard-stopped on the spot.


> Unlike Go's, rust's panics do not actually, universally, get "propagated up the call stack until they reach a "catch point", where they can be handled by the programmer". There's a compiler flag which can be "unwind" or "abort". In the former case (the default), panics can be caught and recovered from. In the latter case, the program gets hard-stopped on the spot.

There are Rust libraries (e.g. salsa, used in rust-analyzer itself) that use unwinding internally for non-local control flow and won't work with unwinding disabled.


2. In default Rust config, unhandled panics end up unwinding the stack, calling destructors for everything, freeing resources, closing files and sockets, and printing an error message and possibly a stack trace. I think this qualifies as the language runtime automatically catching the exceptions.

3. For 99% of users, panic = unwind in rust. If you play with compiler flags, C doesn't have undefined behaviour because ubsan will abort programs if you compile with the right flags.


Honestly every language needs Option and Result types (and exhaustive pattern matching to deal with them). They were invented 50 years ago and languages still come out without them.


The thing that is unfortunate is that Result kind of encourages early failure and bubbling up vs actually handling the error and recovering or crashing.

The other thing you don't see much in Result heavy code is coalescing multiple errors (like for example, most parsers in Rust just bail out at the first error when you want all the errors at once). It's possible to write code that does that, and exceptions have the same problem, but it's still a bit cumbersome.
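Collecting errors is certainly possible with plain values; a TypeScript sketch of a validator that accumulates every problem instead of bailing at the first (field names are invented):

```typescript
interface Draft { name?: string; email?: string }

// Accumulate all validation errors rather than returning on the first,
// the way many Result-based parsers bail early.
function validate(d: Draft): { ok: boolean; errors: string[] } {
  const errors: string[] = [];
  if (!d.name) errors.push("name is required");
  if (!d.email) errors.push("email is required");
  else if (!d.email.includes("@")) errors.push("email is invalid");
  return { ok: errors.length === 0, errors };
}
```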

The real frontier of error handling is algebraic effects, or resumable exceptions.


One step at a time. They don't want a Python 2/3 situation.


Why? Swift has never worried about it, and Dart’s target market is a lot closer to Swift.

Swift has made large backwards-incompatible changes 3 times now.


I've been a mobile dev for many years now. Every team I've been on either largely avoided Swift for the first 3-5 years, or was burned hard due to the constant churn.

It's much more stable these days, but personally I'm still wary of the team's (past?) proclivity to make sweeping incompatible changes.


I feel the same way towards SwiftUI, which seems to have an even rockier upgrade path than Swift.


How many Swift users consider that a good thing though? Swift is on railroad tracks: you have to upgrade on Apple's schedule or eventually you won't be able push your product to the store. Also, 2 to 3 was a nightmare and the Swift team itself is actually pretty chary about doing anything like that again.


Not being able to run a project that's using an older Swift version on the latest Xcode is a terrible experience tho.


I'm glad to see this here; I like Dart. HN in general doesn't seem to. IMO, for cross-platform development, Dart is the way to go if you want a single code base targeting desktop/mobile/web. Some of the things this language can do are amazing.

For quick scripts I still reach for python or Linux: bash, Windows: AHK. But if I'm developing an app I now reach for Dart/Flutter.


My dislike for Dart comes from being an early adopter and getting burned by it.

One of their headline libraries, angular dart, pushed out a broken release that stayed broken for nearly a year.

Dart was revived with flutter and I'm glad it's doing better. However, I have trust issues with the way google runs their projects. I simply don't know if they'll continue to support Dart/Flutter or if they'll drop it next year for something shiny.

I also don't appreciate the gaslighting that happened on the dart forums. There was a lot of "Oh, dart isn't dead, we are using it heavily and actively in google!" that happened when dart was very clearly abandoned by google. They pulled the exact same stunt with GWT before replacing it with J2CL.


I don't work for Google (no inside knowledge), but from what I can see, their use of Dart has if anything increased substantially.

Dart/Flutter are highly unlikely to be abandoned.


Sorry, I'm not trying to say that Google isn't currently heavily using Dart. Rather, that between Dart's initial release (2013) and the release of Flutter (2017), Dart was practically dead.

That's the period that left a bad taste in my mouth. IDK who chose to use Dart for flutter, but by doing so they revived a basically dead language.


Dart tried to be a Web language through Chrome, but it failed [1]. And I think it was a good call. It felt dead at that point because it was lost. Flutter feels more like a miracle that had no reason to succeed (at first).

[1] https://techcrunch.com/2015/03/25/google-will-not-integrate-...


[I lead Dart eng.]

Sorry you had poor experiences. :-(

That said, it was true that Dart was used heavily internally in that time frame. It was and still is, by Ads (ads.google.com) - that's a non-Flutter Dart app and still (to the best of my knowledge) the largest Dart app around. Before Flutter, we went through a period of primarily prioritizing internal customers. That work was often not visible externally.

We did do the shift to Dart 2 in that time period, though. That was a fairly massive change to the language that began independently of Flutter. It was nice timing that Dart 2 shipped at the same time as Flutter 1.


> That work was often not visible externally.

Let me be blunt: the work was never visible externally. What was visible externally was major language designers leaving the team (Lars Bak), updates to the language slowing or stopping altogether, the Chrome team deciding to abandon the Dart VM in Chrome, the Angular team deciding to use TypeScript for Angular 2.0, and AngularDart being abandoned (even as it was still being advertised on dart.dev!).

What other conclusion was an outsider to draw other than "Ok, google must be done with dart"?

What was the plan if flutter never happened? Would google ads continue to use dart?

Like I said, happy that you and your team are now enjoying some nice popularity. I jumped on dart early because I thought it was overall a good idea and decent language. However, once bitten, twice shy.


The odds are very good that Lars would have left regardless of how Dart was doing. He has repeatedly told me "I'm a 1.0 guy". He likes shipping new things but he's not the kind of person to spend a decade cultivating an ecosystem and refactoring mature codebases.

I'm really sorry you got burned by the early experience. It was a hard time for Dart users. It was a hard time on the team too. It felt like we were wandering in the wilderness for a while and struggling to agree on what language our users wanted us to build for them.

I think where we've landed is a much better product, but I'm sorry that our churn getting there caused you pain.


> It felt like we were wandering in the wilderness for a while and struggling to agree on what language our users wanted us to build for them.

And I really appreciate you saying this. I get that what I'm saying is harsh and probably comes off as whiny/ungrateful. I'm mostly just venting because of the original "Why are people mad about dart in HN" comment and explaining my position there.

Dart does look like it's moving forward in a good direction and I wish it all the best. It's certainly great that the language and ecosystem are seeing a fair bit more adoption.


I hope that if you try it again some time, you have a better experience.


This is precisely my take, as well. I got burnt by Angular Dart and the Dart VM, so I'm extremely reluctant to try out Flutter. I've only got so much time on this earth and I don't want to keep investing it in Google's abandonware.


Yeah I remember Google's attempt at Dartium and thought that it was dead when that was scrapped (I didn't realise Angular Dart was even a thing).

I was surprised to see Dart 2.0 come out and Dart 3.0 has surprised me even more.


As long as hardly anyone is using Dart outside of Google, it will remain a niche language.

And based on Google's track record, Dart is at risk of being cancelled any day without any notice.


I skipped Dart nearly completely. Early on I just read about it and concluded that it was pretty much a 1:1 clone of JS, just without some of the weirder parts. Since then JS and TS have gotten massive development, so I'm not really even keen on looking at Dart again.


Yeah, I find Dart to be very underrated. It's straightforward and not overly clever while still having decent expressiveness, and the typing and inference work great. Even outside of Flutter I think it has a lot of potential.


Does it simplify the build/ship process compared with React Native? The latter has a good coding experience (and real native UIs), but the housekeeping is a lot to keep up with. But at least some of that is intrinsic to the mobile platforms themselves


Yes, absolutely. In fact I switched to Flutter precisely because React Native was so broken. Every time I upgraded, some package broke. Expo is alright, but as soon as you try to do something outside of it, it sucks.

Packages didn't break on Flutter; releases are quite stable and generally preserve backward compatibility.


It's a substantial upgrade on pretty much every reasonable metric I can think of compared to RN.


> IMO for cross platform development Dart is the way to go if you are wanting a single code base and targeting desktop/mobile/web.

I can't understand how this can be true when JavaScript/TypeScript exists, but I'm open to religious conversion. Can you share details of the desktop/mobile/web app(s) you've created with Dart?

Another mental block for me is the association with Flutter, since every Flutter app I've tried (iOS and web) feels uncomfortably strange.


Disclaimer: I'm not a developer and could only ever be considered a novice programmer.

A family member was after a bespoke app that would do some calculations for him on site and include pictures and customer details. They wanted it to work on mobile and desktop.

At this point I thought a web or hybrid app would be the way to go. One code base multiple platforms. Cue a month of trying to work out what to use out of an uncountable number of frameworks.

I can't overstate the level of confusion involved in this for a novice like myself! I've created a few simple desktop programs before, but nothing on mobile or the web. Where do you start with the current state of things on the web? I still don't know. What I did know was I didn't want to waste time learning a framework that wasn't going to be around, or had just appeared.

At this point I heard of Dart and Flutter, which worked across all the platforms I was interested in. Not only that, it was JIT or AOT as needed, with hot reload.

As for the app, I don't have a GitHub, it's so bespoke to its usecase I don't think anyone would be interested. But also I'd be very embarrassed to release the code. I dread to think how bad it would look to real programmers/developers!


Don't be so hard on yourself. Nearly all of us just wing it, even if we sometimes claim otherwise.

Congrats on overcoming hurdles and delivering a product. Not all professional developers always achieve that.


Thank you for the positive comments. Dart wasn't on my radar (I don't do a ton of UI dev) but I'll give it a look next time I want to play around with something UI-ish.


I wish Dart/Flutter had an option for a more convenient - imperative GUI.


If you take a look at how Flutter is implemented, you'll see that at the bottommost layer (dart:ui), it is imperative. On top of that, we built the rendering/painting layers (pretty imperative) and then the widgets layer (reactive).

Each of these layers is public API and follows the same breaking change process as any other part of Flutter's public API, so with a bit of effort an alternative imperative framework could conceivably be built on top of what's there. The downside is that since the widgets layer is reactive, you'd be giving up all those shiny widgets that are part of the SDK and need to build your own (or find a way to wrap what's in the SDK).

This (old, but still accurate) talk by hixie covers the layers in detail: https://www.youtube.com/watch?v=dkyY9WCGMi0

TL;DR it's entirely possible to create an imperative framework on top of Flutter's lower layers, re-using a lot of our existing code, but (as far as I know) such a thing doesn't exist today.


how are you targeting the web with dart?


Flutter Web: https://flutter.dev/multi-platform/web

You can also use AngularDart if you want but it's less common than Flutter Web in my experience.


Who actually targets the web with Flutter? I wanna know. Flutter is a canvas app on the web. They barely figured out text copy/paste operations and it still looks and feels odd, Java-like.


I do, for example. I'm building apps that require mobile versions as well, and I don't want to build the same app three times, or even twice. Yes, React Native exists, but there's not as much codesharing between React web and RN as one would expect. Yes, React Native Web exists, but the whole experience is that of using a patchwork of open source libraries, some not as well maintained as others, to get the same experience as Flutter gives, first class.

Flutter on the other hand, for all three targets, it Just Works™.


Dart is one of those languages that has always felt obscure to me, even though there appears to be a lot of work put into it. What are folks using it for?


It’s used with Flutter, Google’s app development framework. I really enjoy Dart as a language, but it is fairly young and missing a few conveniences (such as data classes). It also can’t compile to dynamic libraries, only executables. Despite that, it is very pleasant if you enjoy managed languages.


I taught myself Dart a couple of years ago, and enjoyed the language, but never found a compelling use for it.

For anything I might do in Dart, there's a better alternative. For backend stuff, I'm going to write in Java or Kotlin. For frontend stuff, I'm going to write in JS or TypeScript. I like Flutter, but Flutter by itself isn't reason enough to adopt Dart.

Long term, if Jetpack for Desktop actually takes off, I don't see any reason to write in anything but Kotlin.


I think there was a fork of Chrome that included Dart as another language runtime alongside JavaScript.


Yep, Dartium. That ship has sailed years ago.


That rings a bell with me. I think it was during the early days of Dart and it was a special Chromium build if memory serves me.


That was Dart 1.0 which I think was weakly typed.


It's used in Flutter exclusively, and since Flutter is hardly used at all, it's a pretty unknown language, and will probably always remain so.

It doesn't help Dart that Kotlin and Swift have been null safe languages for a decade now.


I wouldn’t dismiss Flutter out of hand. It’s more popular even than React Native!

https://stackoverflow.blog/2022/02/21/why-flutter-is-the-mos...


More popular by the metric of...StackOverflow question tags. I'm not sure that I would use that as my preferred metric for language popularity.


Indeed the perfect language/tech stack would be immensely popular and never have any SO questions, as everything would be obvious and easy to figure out without help...


… which is why hairdryers have warnings like “do not use while sleeping”.

I mean…


Not a very meaningful metric.

Take a look at job boards and see how many job postings you find for React compared to Flutter.



Flutter web + Firebase allowed me to create a small web app fairly quickly. Hosted effectively for free so it's been deploy and forget.

Dart is a pleasure to work with, I'm not really a CS guy, but it's like Typescript with real types. Better tooling, no NodeJS legacy crap barely holding itself together.

I don't know if it'll ever unseat React on the Web though. Once an ecosystem has momentum it's hard to disrupt.


I don’t buy that null is a billion dollar mistake.

Just because some dude with a fancy name said so doesn’t mean it’s true.

In particular, how many dollars does it cost to have null safety, and how many more does it cost to use it? I feel like folks quoting Hoare never even bother asking this question.


>In particular, how many dollars does it cost to have null safety

Close to none.

>Just because some dude with a fancy name said so doesn’t mean it’s true.

The dude is responsible for tons of what we take for granted. He is also the one who invented the null pointer, so there's that.

I'd take his expert opinion and experience on the matter over anybody referring to him as "a dude with a fancy name".

He didn't sit down to calculate the money in damages from its use - it was more of a figure of speech and a guess ("probably").

But it's very likely that the productivity losses from null errors, the consequences for security, program downtime, and so on, could trivially have cost a billion dollars in damages over the years - actually much more.

A single null error in spacecraft code, responsible for bringing down a NASA mission, could easily cost close to a billion by itself.

In fact the memory-safety industry (static analyzers, in-memory analysis tools, null-safety audits by consultants, and so on) has easily made a billion in profits over the last 40 years, so there's that...


Fp arithmetic has brought down space ships. Is floating point a billion dollar mistake?


FP is like a power saw tool you can use to cut wood or cut your fingers.

null pointer is more like a safety belt silencer. Convenient, especially when you don't want to be bothered or listen to the warning "beep". But there's no actual healthy benefit from using it.

That said, languages not providing a first class fast fixed precision/rational/decimal implementation for where precision is required, and this having been relegated to niche libs, is indeed a mistake.


You could reasonably argue that having languages make floating point arithmetic appear superficially similar to integer arithmetic despite key differences (inability to store exact results of various operations etc.) could qualify as such. In many ways it is surprising that more languages haven't been given features to help avoid FP- arithmetic/rounding/conversion/formatting issues - e.g. just adding a built-in decimal type can help a lot (it's woefully common to have decimal amounts such as monetary quantities stored as FP, particularly in Javascript).
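The surprise is easy to reproduce in Dart (whose double is IEEE-754, like JavaScript's number); the integer-minor-units workaround below is one common mitigation, not the only one:

```dart
void main() {
  // 0.1 and 0.2 have no exact binary representation,
  // so the sum is not exactly 0.3.
  print(0.1 + 0.2 == 0.3); // false
  print(0.1 + 0.2);        // 0.30000000000000004

  // A common workaround for money: store integer minor units (cents).
  const priceCents = 1999; // $19.99
  const quantity = 3;
  print(priceCents * quantity); // 5997 cents, exact
}
```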


I've worked on international trading systems for a Fortune 500 broker-dealer that handled prices as IEEE-754 doubles, even though all currencies involved were decimalized. In the US, stock prices all have to be a multiple of $0.01, but in some markets, this minimum increment/decrement of a price ("tick size") is based on the price range, so lower priced stocks have finer-grained pricing. Determining if the customer was sending in a valid properly rounded price was a pain, as an upstream system had already parsed the price into a double and didn't pass our system the string representation.


when?


> Close to none

so zero dollars were spent adding null safety to languages? False.

Zero dollars were spent converting code to those languages? False.

Zero dollars were spent solving problems uniquely created by not having the benefits of null? Also false.

> I'd take his expert opinion and experience on the matter over anybody referring to him as "a dude with a fancy name".

Sounds like a crappy way to reason about facts. Just because he did cool shit once upon a time doesn’t mean that we should believe him when he pulls numbers out of his ass.


> Zero dollars were spent solving problems uniquely created by not having the benefits of null?

This one doesn't make any sense in the context of the languages being discussed. What unique problems does it create in Swift/Kotlin/Rust?


"close to none" is not the same as "zero". But yes, converting Dart code to null safety is not very difficult.


“Not very difficult” is not the same thing as “close to no cost”


Is this about being obtuse on purpose?

I could also do an imitation of a dumb pedantic person and just answer your "First, zero dollars were spent adding null safety to languages? False" with: "I didn't say 'zero', I said 'close to none'. So PWNED!" or something equally immature.

But one can also choose to be charitable. E.g. to understand that:

> Zero dollars were spent converting code to those languages? False.

Having null safety doesn't necessarily mean "converting". Could also mean not having nulls in a language to begin with. So "how many dollars does it cost to have null safety" in that case is zero. It's fixing the addition of nulls after the fact that can have a cost.

Even so, the conversion to null safety is not that costly (it's nothing like a rewrite, more like going around a program and fixing SQL Injection cases or XSS). It also makes evident many logical and safety errors in the initial program when done (many teams have written about such experience), and any cost has to be offset with the cost of those errors not happening anymore.

In any case, those are not costs of not having null to begin with in the language. They are costs of retrofixing code in ones that did have it.

> Zero dollars were spent solving problems uniquely created by not having the benefits of null? Also false.

There are no "problems uniquely created by not having the benefits of null". In fact, there are no "benefits of null" to begin with.

> Sounds like a crappy way to reason about facts.

Then again, it sounds like you didn't provide any facts. Just showed ignorance of computer science theory AND history, along with certainty and an immature tone.


> Zero dollars were spent solving problems uniquely created by not having the benefits of null? Also false.

Do tell what those problems are? Whenever I hear people complain about this, they suffer from some fundamental misunderstanding of how null-safe languages work in practice


Not to defend the OP, because I agree that null is bad. But, in my limited experience with Swift, I just add a few characters at the right place and the nulls are now incorrectly ignored and the warnings go away. So, the language gave me a warning, but it didn't really force me to deal with it correctly. Is Swift a bad example of a null-safe language?


What characters exactly? "!"?

You literally tell it to break; a null-safe language will not actually prevent you from shooting your foot, it'll just make sure you do so knowingly. "foo!" is essentially a shortcut for

    guard let foo = foo else {
        fatalError()
    }


> I just add a few characters at the right place and the null's are now incorrectly ignored and the warnings go away. So, the language gave me warning, but it didn't really force me to deal with it correctly.

Personally I don't really see that as an issue, we're talking about null safe by default vs not safe by default. It being simple to break the null-safety is a good thing, as long as it requires you to do it deliberately


It's a language that didn't go all the way to null-safety.

But at least it made the unsafety explicit (you HAD to add those characters), whereas in C it's the default.


> It's a language that didn't go all the way to null-safety.

Is it? I can’t think of any langage but Elm which doesn’t have that assertion. Though I guess e.g. Idris or ATS could have left out as well.


I don't think Hoare said that null was his mistake. I believe he said that the null reference was his mistake. In other words, null is not a reference, and treating null as a reference in all contexts conflates optionality and referentiality. The problem is that referentiality and optionality should be orthogonal concepts in type systems.

Nullability/optionality is clearly useful. References/pointers are clearly useful. The problem is making them a single concept in a type system, thus requiring all references/pointers to be nullable and making it very awkward (such as forcing the user to manually declare a wrapper type holding a boolean and the wrapped value) to have optional values without making them references.

The conflation of referentiality and optionality in the most popular static type systems also bleeds over into mental shortcuts used by many programmers in dynamically typed languages when thinking about the types of parameters expected by functions/methods. Had Algol W, C, etc. kept optionality and referentiality separate in their type systems, I think Python/Ruby/Lisp etc. programmers would likely think more carefully when passing None into functions/methods.


Lisp doesn't have None; it has nil. The Lisp nil was not inspired by the conflation of referentiality and optionality in popular static type systems. It predates them.


The conjecture about dynamically typed languages was regarding the cultural effect of the popularity non-null-safe static type systems on the current practice of coding in dynamic programming languages. It wasn't a conjecture about the structure of the type systems in those dynamically typed languages.


I find that even less plausible. I don't suspect that the null reference practices in, say Java, have any influence on the way people work in Lisp.

Optional parameters in Lisp take on a value of nil when the argument is omitted (unless a different default is specified). Various search functions return nil when an item is not found: find, member, gethash, ...

The practices around these conventions may, at times, resemble work in static languages with null references, but there is no cause-and-effect there.


After using languages with the Option type [0] and exhaustive pattern matching [1] (OCaml was my first exposure), I can definitely say that bare null pointers, without using the prior two things I mentioned, are a mistake.

It's not that null as a concept is a mistake, since an Option type has both Some and None; it's that in most mainstream languages, people have to deal with nulls implicitly rather than having the compiler check for them explicitly. And if the computer can check our work for us, why do we have to do it ourselves?

That's why it leads to mistakes and is why it's called a billion dollar mistake, because I'm sure at least 1 billion (possibly even 1 trillion) dollars worth of manpower, lost revenue and time have been spent dealing with nulls.

[0] https://en.wikipedia.org/wiki/Option_type

[1] https://en.wikipedia.org/wiki/Standard_ML#Pattern_matching
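Dart doesn't ship a built-in Option, but as a sketch, Dart 3's sealed classes plus exhaustive switch expressions are enough to build one (the Some/None names here are illustrative, not a standard API):

```dart
// A minimal Option sketch using Dart 3 sealed classes.
sealed class Option<T> {}

class Some<T> extends Option<T> {
  final T value;
  Some(this.value);
}

class None<T> extends Option<T> {}

// The switch must handle both cases or the compiler rejects it.
String describe(Option<int> o) => switch (o) {
      Some(value: var v) => 'got $v',
      None() => 'nothing',
    };

void main() {
  print(describe(Some(42))); // got 42
  print(describe(None()));   // nothing
}
```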


> how many more does it cost to use it?

The null-safe language I've used the most is Rust. It is far from the hardest part of the language to learn, but the biggest impact day-to-day is that when constructing a new object, initial values for all the non-nullable fields have to be provided all-at-once. You can't have a constructor which is passed a `this` object full of nulls then fill in all the values later, instead the object is constructed whole in a single operation.

Otherwise the explicit distinction between T and Maybe T just makes life easier.
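For what it's worth, Dart's null safety imposes the same constraint: non-nullable fields must be definitely initialized before the constructor body runs. A minimal sketch with a hypothetical class:

```dart
class Point {
  // Non-nullable final fields: the compiler requires they are
  // initialized before the constructor body runs.
  final double x;
  final double y;

  // Initializing formals satisfy that requirement in one step.
  Point(this.x, this.y);

  // This would be a compile error -- assigning in the body is
  // too late for final non-nullable fields:
  // Point.broken() { x = 0; y = 0; }
}

void main() {
  final p = Point(1, 2);
  print(p.x + p.y); // 3.0
}
```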


Ironically, you can do that in Dart. If you declare your member late and never initialise it, you can run into what amounts to a null reference exception.
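A quick sketch of what that looks like (hypothetical class; the error only surfaces at runtime):

```dart
class Config {
  // `late` tells the compiler "trust me, this is assigned before use".
  // If that promise is broken, reading the field throws a
  // LateInitializationError at runtime -- morally a null reference error.
  late String name;
}

void main() {
  final c = Config();
  try {
    print(c.name); // never assigned -> throws
  } on Error {
    print('caught LateInitializationError');
  }
}
```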


null itself isn’t the billion dollar mistake, it’s that literally every object can be T|null. Undefined or null is actually quite useful, as long as you’re expecting it.


That’s not necessarily a mistake.

Claiming that a reference cannot be null means that if you later realize it has to be, you’ve got a potentially compat-breaking refactoring to make.

So, nullable-by-default is either a good idea or a bad idea depending on lots of complex reasons.


That's the exact same issue you run into if you suddenly find out that you need to use a different type (because.. Yeah, that's exactly what it means). That doesn't mean you should default to writing all your code without types


Seems like an easy fix. At least in languages like TypeScript:

https://www.typescriptlang.org/tsconfig#strictNullChecks


Yes, null safety is a very simple change to type systems and type checkers.

Unfortunately, there's a lot of cultural momentum in this regard.


Null safety means you can't dereference a null pointer. Therefore something like `T | null` is not possible, because `null` doesn't exist. What you'd have instead is an Option or Maybe type and a compiler that requires exhaustive handling of it.


In the informal context of GP’s comment, null is a singleton type whose sole value is null.

That exact syntax (modulo langage divergences) is one of the options for “nullable” in Python: https://docs.python.org/3/library/typing.html#typing.Optiona...


Which can still lead to unrecoverable runtime errors, that could otherwise be compilation errors.


What are you talking about?


I'm talking about the fact that nullable types do not on their own guarantee null safety and that these two things are not equivalent.


That's about as unclear as your previous comment, at best. At worst, it's just wrong.

The typechecker would only allow on the union operations which are allowed on both types, anything beyond that would first have to use type checks or assertions in order to split the union.

So, again, what are you talking about?


That's actually a good point. I made assumptions about the meaning of the term `null` that can't be true in every language that uses it.

There are some exceptions to the behaviour you describe, like for instance C#, which for the longest time only allowed value types to be annotated as nullable, and only very recently extended this to reference types - and only as an opt-in feature where the type checker only throws warnings, etc. That would be a case of a language which is not "null safe" but provides facilities for accomplishing that.


Here's my take: yes, it was a billion dollar mistake. But it wasn't null on its own that was the mistake, it was making it impossible to opt out of. It's incalculable how much that breaks pretty much all of the mental modelling you do about systems. Null safety is the only way to get back to where we want to be.


I believe that's why Hoare said that the null reference, not null, was his billion dollar mistake.

It was a simple one-liner in his type checker "if getting a type error in a cast, if the source type is nulltype and the target type is any kind of reference, always let the cast happen". His (biased by hindsight) recollection is that he even felt it was a bit dirty at the time, but made a lot of code shorter.


I always thought of it as a humblebrag. Like, I made this billion dollar mistake while giving birth to a trillion dollar industry.


You’re not wrong.


He was wrong on this but just in the amount. A billion dollars isn't that much these days.


Billion dollar mistake? No

But anecdotally I've experienced many null-pointer runtime errors, to the point where I very strongly believe having strict or even half-decent null checking reduces the amount of runtime errors my code produces non-negligibly. I'm sure not everyone has the same experience, but I strongly prefer languages with null safety (AKA marking a type "non-null", and then the compiler makes a good effort to ensure it's not null).


At $100k per year, $1B is a mere 10,000 man-years; with 4.4M devs in the US, that's less than one work day per currently-working dev across their entire career.

I'm sure that I've spent weeks worth of time doing this in one form or another and I doubt that I'm alone here. By that metric, it's more like the 100B mistake.


Well said!


You are correct.

The billion dollar mistake was not null, it's languages that have a null value but don't support nullability in their type system.

Kotlin and Swift get this right.


There is a cost to pay up-front in having null safety in your codebase, but the savings come back over time. Can we easily measure whether or not it truly makes a difference? Maybe not, but I'm jaded enough to gladly pay that cost, knowing I can think about whether or not something is null once, versus having to forever wonder if the complicated code I need to interact with is going to bite me later.


I agree. I write Java for many years and never had I any particular issue with nulls. I used Kotlin for a while and null-safety is a compromise which often leads to verbose and unnecessary code.

I don't have any particular side on this battle, I can use both approaches. I just don't think that it's a big deal.


Yeah if anything it's way more than that.


I've come to the conclusion that in the generic business software world, removing null would be a trillion dollar mistake. Instead of applications simply crashing, millions of hours would be wasted trying to clean up bogus values (e.g. "changeme", empty string, etc) from databases and calculate what should have been there instead.


Nobody is removing null because null itself is not the mistake. The mistake was to say like in Java that null is a valid value for every possible class under the sun. What Kotlin & cie are saying is that if your parameter is a RandomGenerator, null should not be allowed there because null is not a RandomGenerator. Null is null and that's it. If you search something in a map, your value will be of type "String?", that is "String | null" and that's completely fine, the compiler will assist you with that too.
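Dart's Map works the same way: the index operator returns V?, and the compiler won't let you use the result as a plain value until the null case is handled. A small sketch:

```dart
void main() {
  final capitals = {'France': 'Paris', 'Japan': 'Tokyo'};

  // Lookup returns String?, not String.
  final String? maybeCapital = capitals['Spain'];

  // The compiler forces a null check (or an explicit default)
  // before the value can be used as a plain String.
  final capital = maybeCapital ?? 'unknown';
  print(capital); // unknown

  // Flow analysis promotes the local variable after a check:
  if (maybeCapital != null) {
    print(maybeCapital.toUpperCase()); // String here, not String?
  }
}
```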


I'd even question whether a function whose job it is to "find" something should be able to return a Nullable value - or at least, it shouldn't be possible to just force such a value to be treated as an actual value (as is allowed by Swift/Kotlin/C# etc with "force unwrap" type operators). I'd rather see a language with built in support for functions that can either return one type or another, whereby the caller must provide two code paths to deal with each. And yes this would imply you could never simply call such a function and store its return value in a variable.
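As a hedged sketch of that idea in Dart: a lookup that never exposes a nullable (or force-unwrappable) value, forcing the caller to supply both code paths up front. The function and its names are hypothetical, not any standard API:

```dart
// Illustrative only: a find that never hands back a nullable value.
// The caller must provide both code paths; there is nothing to unwrap.
R find<T, R>(
  List<T> items,
  bool Function(T) test, {
  required R Function(T) found,
  required R Function() missing,
}) {
  for (final item in items) {
    if (test(item)) return found(item);
  }
  return missing();
}

void main() {
  final result = find<int, String>(
    [1, 2, 3],
    (n) => n > 2,
    found: (n) => 'found $n',
    missing: () => 'not found',
  );
  print(result); // found 3
}
```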


That was the initial intent behind Kotlin's Result<T> type; it was only allowed to be a return value from compiler-blessed functions (runCatching and friends) and was not allowed to be stored in a property.

The problem was that this became another function color; also, no language construct can prevent users from implementing "fun Result<T>.unwrap(): T" and crashing the program in the failure case.

OK, you could also restrict functions that accept Result<T> (as a receiver or parameter), but now you force resolution of that type by the direct caller of the function returning Result<T>. So really this becomes a worse version of checked exceptions, which (thankfully) don't exist in Kotlin.


I think you misunderstand what null safety really means. It’s not that values can’t be null/None/Empty. It’s just that it’s made explicit where this is the case and where it’s not the case.

Without null safety everything is implicitly maybe null.


In other words, the problem is that null is treated as if its class is a subclass of all classes. All of a sudden, you've introduced a superset of all of the problems with multiple inheritance. (Nulltype inherits from everything!) You've also made it nearly impossible to adhere to the Liskov substitution principle (unless none of your classes can do anything, making all object instances equivalent at runtime).

In type theory, a type that is a supertype of all types is called a "top type" and a type that is a subtype of all types is a "bottom type". It's okay to have one or more bottom types in a type system, as long as it's not possible to create instances of them. Unfortunately, in most of the popular statically typed languages, null is an instance of a bottom type. One reasonable use of a bottom type would be to allow a variable to assume one of many types, and require a dynamically-checked upcast when actually passing that variable to a function that cared about its types.


do you see this a lot in industry codebases written in languages that have null safety?


The more I think about it, the more it sounds like hubris. If Tony Hoare hadn't made that design choice then somebody else somewhere would have.

I think the sentiment behind the quote and its widespread use is correct however. It definitely does seem like the tradeoffs of null safety are worth it in the vast majority of cases. Most of us who use GC languages with massive runtimes sacrifice a lot more performance for a lot less value.


> If Tony Hoare hadn't made that design choice then somebody else somewhere would have.

Not necessarily. If Hoare had not used null references he might have invented nullables to fill the need and others would have replicated that.


I am supposing that the further back in time you go, the more likely it is that simplicity and performance would influence language and compiler design. As opposed to developer experience.

I think this can be supported by the fact that these fancy language features have been successfully implemented for decades and are only now showing up in mainstream languages.


> the more likely it is that simplicity and performance would influence language and compiler design.

Neither "simplicity" nor "performance" were huge factors in Algol's design.


He invented it for algol, a language with sufficient complexity that it was basically impossible to implement. The process of creating c involved stripping off everything but the most necessary features. Null references made it, nullable types likely wouldn't have.


> The process of creating c involved stripping off everything but the most necessary features. Null references made it, nullable types likely wouldn't have.

Nonsense.

The entire point of Hoare's design was to have type safe references, which C pointers most definitely are not. The entire point of references was that they not be pointers.


I enjoy Dart a lot currently, but I can only imagine it being used on the client-side for building an ui, nothing else. It even advertises itself like so.

To me (with my limited knowledge on Dart), it feels like Dart is lacking a lot if it wants to work as a systems language.


The main sass implementation is in dart https://github.com/sass/dart-sass, which I think is dumb because it's a fair amount slower than the old libsass written in C.

The reason they gave was "low developer resources" due to the difficulty of C, so they swapped to a language pretty much only used by Googlers. Google now has control of sass, which then gives them greater influence (even greater than just having browser majority) over CSS Spec choices (see CSS Nesting spec).


I would argue that if someone was interested in working on sass there's a far lower entry barrier to learning enough dart to do so than getting good enough at C to work with the old libsass.


When I first got into Kotlin I thought that null safety was a matter of getting rid of null, like some try to do in their Java code.

Now I know that actually it gives me liberty to safely use null wherever I want and not have to worry about it blowing up in my face later.

The key to this is that the language makes it obvious what can be null ("Type?") and gives me ergonomic tools to handle nulls, such as ?. and ?: and compiler non-null inference. I wonder how this pattern of

make X visible and give ergonomic tools to handle X

could be applied to improve other aspects of programming.
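Dart's equivalents of those tools are ?. (safe call) and ?? (null-coalescing default); a small hypothetical example:

```dart
String greet(String? name) {
  // `?.` short-circuits on null; `??` supplies a default.
  final trimmed = name?.trim() ?? 'stranger';
  return 'Hello, $trimmed!';
}

void main() {
  print(greet('  Ada ')); // Hello, Ada!
  print(greet(null));     // Hello, stranger!
}
```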


What is the state of dart for cross-platform development? I looked a few years ago and it seemed like progress was being made for web development and desktop use, if I remember correctly.


Web development is not really going well (imho), but desktop I find good enough to at least prototype with. Whether or not I can make my mind up about Dart, Flutter really feels right to me.


[I'm the eng lead for Dart.]

Dart itself is used heavily on the Web by Google. E.g., ads.google.com is a Dart web app - but not a Flutter one. It uses a Dart version of the Angular framework. It's probably the largest Dart app in existence today.

Flutter on the Web is less mature, but (IMO) making good progress.


Isn't AngularDart "deprecated" now? I say "deprecated" in quotes because the repository has been archived [0] and it says it's been moved to a community project, which I'd think is a little strange if Google has a large (and I'd assume important due to it being a revenue generating site, ie ads) codebase but then archives the project.

Big fan of Flutter though, what's next for Flutter on the web, anything interesting?

[0] https://github.com/angulardart


Regarding AngularDart, see:

https://medium.com/dartlang/angulardart-flutter-and-the-web-...

TL;DR - It's still heavily used inside of Google, but the team decided to stop maintaining the open source version of it. It was a fair bit of work to do both - different build rules, different tests / test infra, different priorities. Effectively, AngularDart has been forked. There is an internal-only version that is actively developed, and there is an external community project.

Regarding Flutter on the Web, there is a lot of active work, but I'm not the best to speak to all of it. On the Dart side, it's one of the major reasons we're investing in things like compilation to Wasm.


Ah I see, so Google doesn't use the open source version at all. I thought it was like Flutter or Angular where the OSS version was used but it's different in this case.

Just had another question, I actually just filed a feature request for Dart (based on reading this thread about Option and Result types) about whether Dart has a Result type [0]. Looks like it does in Flutter's async module, but I wasn't sure why that wasn't also brought to the rest of the language.

And for WASM, I thought garbage collection for WASM wasn't stabilized yet, will Dart have to wait until then? What benefits does WASM provide for Flutter that's not already covered by how it does web support anyway, ie drawing inside a canvas?

[0] https://github.com/dart-lang/language/issues/2697


For Wasm, yes, we're targeting the new garbage collection support using the experimental implementation in Chrome. We think this has the potential to help with performance.


That's excellent to hear! I did not know (my lack of research in that regard obviously) that Dart was used that much outside Flutter! I really like Flutter and see them as married, but I'm obviously wrong. Thanks for the explanation.


are you using flutter just for the vanilla use case of a mobile app?


Mobile + Desktop. But these are LoB apps. I just like it better than React(Native). The results are really robust and easy to debug if something wrong. Also when I do an update, unlike React, not suddenly everything is broken.


The most important thing to remember is that Dart doesn't support shared memory concurrency in the traditional sense. It's modeled after JavaScript's WebWorkers (message passing).


I'm genuinely curious, and I don't know much about Dart and haven't used it. The comparisons to Kotlin are interesting, especially sealed classes and pattern matching. For this example from the post, is it not possible to use an extension method (common practice in Kotlin I've seen and written)?

    double calculateArea(Shape shape) => switch (shape) {
      Square(length: var l) => l * l,
      Circle(radius: var r) => math.pi * r * r
    };

Aka:

    extension Area on Shape {
      double calculateArea() { ... }
    }


Extension methods (which Dart also supports) are orthogonal to the pattern matching going on here. If you wanted to define that example function as an extension method, you could:

    extension on Shape {
      double calculateArea() => switch (this) {
        Square(length: var l) => l * l,
        Circle(radius: var r) => math.pi * r * r
      };
    }

    example(Shape someShape) {
      print(someShape.calculateArea());
    }

Extension methods are statically dispatched, so they don't give you any way to write code that's polymorphic over the various subtypes. In order to have code specific to each subtype you need something like virtual methods (which requires you to be able to add methods directly to those classes) or type switches (as in the pattern matching like we have here).
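A small illustrative sketch of the static-dispatch point (the `describe` extensions here are made up, not from the post): when two extensions define the same member, resolution goes by the receiver's static type, not its runtime type.

```dart
class Shape {}

class Circle extends Shape {}

extension ShapeDescribe on Shape {
  String describe() => 'shape';
}

extension CircleDescribe on Circle {
  String describe() => 'circle';
}

void main() {
  Shape asShape = Circle();
  Circle asCircle = Circle();
  // Both variables hold a Circle at runtime, but extension resolution
  // uses the static type, so the first call picks ShapeDescribe.
  print(asShape.describe()); // shape
  print(asCircle.describe()); // circle
}
```

That's why a type switch (or a virtual method on the classes themselves) is needed to branch on the runtime subtype.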


Thanks. I'm just wondering why the example didn't show this extension method. It seems more idiomatic to just call it on the receiver instead of passing a parameter.


It's only more idiomatic if you're coming at this from an object-oriented perspective where anything "dispatchy" should be in receiver position. But once you let go of that notion, it's entirely natural to just have a function that does a type switch.


I am fine with either OO or FP approaches. This example is a sealed class, and it is just my opinion that the OO version reads better. There's also the added benefit of having the function namespaced / bound to the class for IDE support (autocomplete) and documentation.


These sound like amazing advancements. Now they just need to add a builtin Option<T> to all the things.


I really like option types in languages that were designed around them, but given Dart's history as an imperative C-ish language with millions and millions of lines of code out there, I think nullable types are a better fit for the way users think about and work with the language.

It's really hard to do both approaches. (The fact that Java has an Option<T> type, but all references to it are also nullable, is sometimes a source of chagrin and sometimes hilarity to me.) If you want a nice user experience, consistency across the ecosystem is huge.

So when we decided what to do about statically checking for null reference errors, we went with nullable types (the same as Kotlin, TypeScript, and C# do) instead of option types (Swift, Rust, etc.). I wrote a long thing about my thinking on it here if you're curious:

https://medium.com/dartlang/why-nullable-types-7dd93c28c87a

It's a real trade-off. There aren't perfect solutions. Option types have some nice composability properties, but they tend to be more verbose and don't play as nice with the imperative control-flow heavy code that's idiomatic in C-derived languages.
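One concrete upside of nullable types in imperative, control-flow heavy code is flow-based type promotion: a plain null check is enough for the compiler to treat the value as non-nullable afterwards, with no explicit unwrap. A tiny sketch (the function name is made up for illustration):

```dart
// After the early return, `name` is promoted from String? to String,
// so `.length` can be called without any unwrapping or `!` operator.
int nameLength(String? name) {
  if (name == null) return 0;
  return name.length;
}

void main() {
  print(nameLength(null)); // 0
  print(nameLength('Dart')); // 4
}
```

With an Option type, the same function would typically go through a fold/map or an explicit match instead of an ordinary `if` and early return.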


It seems there's often a need for languages to evolve towards sanity rather than being sane out of the gate.


I missed the part where language design is a solved problem and everyone is able to create a perfect programming language on their first try. Do you want every language to just be slightly different Rust?


In that respect it's more about ML/SML from the 1970-80s.

Even putting null-safety aside, the latter part of the post with destructuring tuples and pattern-matching shapes looks almost 1:1 like SML code!


I'm the main designer of the patterns stuff we're working on in Dart. Yes, SML is absolutely one of the primary inspirations. The design also takes a lot of inspiration from how Swift and C# approach pattern matching and destructuring. There are so many good languages out there to learn from.


> Do you want every language to just be slightly different Rust?

Kinda.

The trap with putting out 'crap' and fixing it later is that language designers will inevitably try to preserve backward compatibility, meaning the crap has to stay in forever.

Putting null and null-safety into a new language is like putting venereal disease and condoms into a new language. People will spill paragraphs on whether or not condoms are worth it, and whether VD is a problem if you have a sufficiently advanced IDE. But why put the VD into the language in the first place?


A language designed without null safety from the start isn't worth my time at all. It's like a "business language", released to meet a deadline. I've suffered enough burnout and stress from those kinds of languages already. It's enough.


Some claim null is a mistake. I will counter and say that null not being expressible within the type system was the real big mistake.

Languages where you can express if something can be null or not are a joy to work with and remove a whole class of errors.

That being said, I still don’t really understand the value proposition for Dart. Through different versions it’s morphed in the classic solution-looking-for-a-problem way (eg the optional typing). Did we really need this when Java, JavaScript, Python and Go existed?



