
Funny how the already weak case for not working on Julia instead of creating a new language is becoming even more flimsy:

FAQ:

> Why not make Julia better? > We think Julia is a great language and it has a wonderful community, but Mojo is completely different. While Julia and Mojo might share some goals and look similar as an easy-to-use and high-performance alternative to Python, we’re taking a completely different approach to building Mojo. Notably, Mojo is Python-first and doesn't require existing Python developers to learn a new syntax.

https://docs.modular.com/mojo/faq/#why-not-make-julia-better

Now:

>We oversold Mojo as a Python superset too early and realized that we should focus on what Mojo can do for people TODAY, not what it will grow into. As such, we currently explain Mojo as a language that's great for making stuff go fast on CPUs and GPUs.



Even funnier: https://github.com/tensorflow/swift/blob/main/docs/WhySwiftF...

> Julia: Julia is another great language with an open and active community. They are currently investing in machine learning techniques, and even have good interoperability with Python APIs.


Yeah, that was a missed opportunity for sure; joining forces 8 years ago would probably have yielded better results than the current situation.


The share of projects that start out “we’re going to do X, and completely support using the syntax and semantics of existing language Y, but add more on top” that end up “we’re going to do X, in a language vaguely reminiscent of Y” is very hard to distinguish from 100%.


I guess there's survivorship bias, but the ones that first come to mind fared pretty well with compatibility (e.g. C++, TypeScript, OCaml).


I think the real reason is that Chris Lattner doesn’t want to work on Julia. He’s likely very (and justifiably so) opinionated about these topics, and probably wants a certain degree of creative authority over his projects.


Why deal with others when you are big enough of a name to launch your own and make all the decisions yourself?


Although, from the way some Modular videos are done, it seems he is nowadays more in an overseer role, setting the direction, while others are the ones actually pushing Mojo and the Max tooling forward.

Maybe I just misunderstand it from the presentation format.


He works on Mojo a lot, mostly on weekends. In the past few months, he has worked on strings, collection literals, dependent types, reference captures, comprehensions, and many other nice language features.


Additionally, Julia works on Windows, lots of its issues have been ironed out over the last decade, and many folks are invested in it, rather than it being a single company's product.

Yes, many of the mainstream languages started as single-company products, but let's put it this way: would anyone be writing one of those languages today had they not been the gatekept way to access a specific platform?

So outside of accessing Max and its value proposition as a product enabler for XYZ, who would be rushing to write Mojo code instead of something else?


I think the Modular proposition is to solve the AI infra problem using a modern systems programming language with Python syntax (to keep the mental overhead low); it is mainly trying to replace C++/CUDA. I am not sure Julia is suitable for such an endeavour. Julia is not marketed as a systems programming language, and I find it hard to believe that it can be one as a dynamically typed language with GC and a JIT. Could Julia replace Python in the high-level dynamic code space used in research and training? Maybe. But I find it really hard to believe that it can replace CUDA/ROCm/C++, etc.


Lisp Machines from Xerox PARC, Texas Instruments and Symbolics (Genera) proved their point, even if the market wasn't up to their price point versus UNIX graphical workstations.

Dylan was going to be Newton's system programming language, and while the language group lost to the C++ team (Apple had two competing teams for the Newton OS), it was still NewtonScript for everything userspace, and it was getting a JIT by the time the project was canceled.

Objective-C is dynamically typed beyond the common subset with C, and was even used to write NeXTSTEP drivers.

I don't know how much of a chance Julia has against CUDA/ROCm/C++, especially now that everyone in the GPU space has decided to give Python feature parity on their hardware, via day-one bindings to the compute libraries and JIT DSLs, which leaves Mojo with even less of a chance than Julia has.

Julia has an established ecosystem and a presence in the scientific community, with ties to MIT.

Python is the champion, and most folks writing CUDA/ROCm/C++ are already using it.

So who would be reaching for Mojo, instead of Python JIT DSLs/bindings or Julia, when they have a Fortran, C, or C++ allergy?


Creating DSLs/bindings from Julia or Python to an underlying platform like CUDA is not really replacing it; it is adding yet another layer on top of the existing platforms, or creating a prototyping environment for research. The question is not whether Julia can interface with CUDA, it is whether Julia can replace C++/CUDA/ROCm in end-to-end scenarios (preferably with portability across GPU vendors). If not, then there is no comparison between the stated goals of Julia and Mojo; they are different languages targeting completely different use cases.


Julia does not just have bindings to CUDA. Native Julia code can be compiled into .ptx kernels (https://cuda.juliagpu.org/stable/development/kernel/). The same code can also generate kernels for AMD GPUs, Intel GPUs, and Metal.

We, for example, built software that generates kernels on demand, embedding user functions, for all four of these systems, and showed it's much faster than plain CUDA bindings for array functions on certain nonlinear systems (https://www.sciencedirect.com/science/article/abs/pii/S00457...)
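
For a concrete sense of what that looks like, here is a minimal sketch (not code from the paper above; the kernel name and launch parameters are made up for illustration): an ordinary Julia function launched as a GPU kernel through CUDA.jl, which compiles it down to PTX with no C++ or CUDA C involved.

    using CUDA  # JuliaGPU's CUDA.jl package

    # A plain Julia function used as a GPU kernel; CUDA.jl compiles it to PTX.
    function axpy_kernel!(y, a, x)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(y)
            @inbounds y[i] = a * x[i] + y[i]
        end
        return nothing
    end

    x = CUDA.rand(Float32, 1024)
    y = CUDA.rand(Float32, 1024)
    @cuda threads=256 blocks=cld(length(y), 256) axpy_kernel!(y, 2f0, x)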


Julia is much more like a static language than you might realize. In fact, within a fixed world age, as far as Julia's JIT is concerned, it is a static language. Our JIT also isn't like other JITs; we sometimes call it a "Just Ahead Of Time" compiler because it is built much more like a traditional compiler than a tracing JIT.

We have quite fantastic GPU compilation support too: Julia functions can be compiled for Nvidia, AMD, Intel, and Apple GPUs through their respective GPU compiler packages, and one can use KernelAbstractions.jl to write code that is GPU-vendor agnostic and works on all of them.
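
To illustrate the vendor-agnostic part (again just a sketch with made-up names, following the usual KernelAbstractions.jl pattern): one kernel definition is launched on whichever backend the arrays live on, CPU included.

    using KernelAbstractions

    # One kernel definition, compiled for whichever backend the arrays live on.
    @kernel function saxpy!(y, a, @Const(x))
        i = @index(Global)
        @inbounds y[i] = a * x[i] + y[i]
    end

    # Plain Arrays use the CPU backend; swap in CuArray/ROCArray/oneArray/MtlArray
    # and the same kernel is compiled for that vendor's GPU instead.
    x = rand(Float32, 1024)
    y = rand(Float32, 1024)
    backend = get_backend(y)
    saxpy!(backend)(y, 2f0, x; ndrange = length(y))
    KernelAbstractions.synchronize(backend)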

We're also getting an (experimental) fully ahead-of-time compiler built into the language with v1.12 that spits out an executable or dylib.



