I love how easy it is to view the machine code for a function. Julia actually ships with a handful of these introspection macros: https://docs.julialang.org/en/v1/devdocs/reflection/#Interme.... They are invaluable when it comes to optimization.
Indeed, inspecting typed code, lowered code, llvm ir, and native code from the REPL is great. Combine that with @benchmark for microbenchmarking and it's amazing.
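For reference, `@benchmark` comes from the BenchmarkTools.jl package rather than base Julia. A minimal usage sketch (my own example, not from the linked docs):

    using BenchmarkTools

    xs = rand(1000)
    # `$xs` interpolates the value so the benchmark doesn't measure global-variable overhead
    @benchmark sum($xs)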

Note that the 'manual dispatch' is optimized out in a very early stage:

    julia> @code_typed foo(1)
    CodeInfo(
    1 ─     return 2
    ) => Int64
So, no need to look at assembly here.
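For context, a "manual dispatch" definition along these lines (a hypothetical reconstruction, since the article's exact code isn't quoted here) produces that output:

    # hypothetical reconstruction of the article's example
    foo(x) = x isa Int ? 2 : 3.0

With `x` known to be an `Int`, the compiler proves the `isa` check true and the whole branch collapses to `return 2`.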
Prof. Alan Edelman gave a nice lecture about this a couple of days ago https://www.youtube.com/watch?v=IuOXXQR7dAo
I keep saying that Julia is the future. No more tinkering with Cython and other hacks.
Have to agree, just recently started playing with it and was blown away. If the web side gets a bit more polish, it will become a killer language for web development.
https://genieframework.com/ (A Django like MVC framework, with templating and a julia -> JS compiler for the frontend)

https://www.youtube.com/watch?v=xPUOCQ2SJF0&list=PLP8iPy9hna... (JuliaCon 2020 | Write a WebApp in Julia with Dashboards.jl |)

https://www.youtube.com/watch?v=uLhXgt_gKJc&list=PLP8iPy9hna6Tl2UHTrm4jnIYrLkIcAROR&index=10&t=11428s (JuliaCon 2020 | Building Microservices and Applications in Julia)

https://www.youtube.com/watch?v=8sciqIMXBng&list=PLP8iPy9hna... (JuliaCon 2020 | Interactive data dashboards with Julia and Stipple | Adrian Salceanu)

“a julia -> JS compiler for the frontend”

Really? Where? I’d love to know about that.

Oh, yes, I knew about that, thanks; it sounded as if there might be something further along.
There are definitely projects that show promise.
Same here, unless Python gets more serious with a similar JIT adoption story.

As for the ML stuff, I get the same libraries (written in C++ anyway) from other languages' bindings.

Slapping a JIT on Python can't and won't solve its problems. Python's problem is that its semantics make many very important optimizations impossible.

Julia's language semantics were specifically designed to be friendly to JIT optimization. This is something that language devs need to be thinking about very early in the design process or it's hopeless.

The usual argument, yet it works quite well in languages like Lisp and Smalltalk, both much more dynamic than Python.

At every moment the image can completely change shape, at any given breakpoint I can change anything and then resume execution at a previous point and so on.

I can't say anything about Smalltalk since I only played with the VM once and don't know the whole story, but I can think of two reasons why Lisp, even though it is more dynamic than Python (for some definition of dynamic), is easier to JIT.

First is the separation of compile time (macros) and runtime, which provides an escape path to move the dynamism out of runtime, so the most powerful transformations don't even concern the JIT optimizations (beyond running the macros exactly as described). That's in contrast to R, which is inspired by Lisp like Julia but uses optional lazy evaluation to implement a lot of what a macro can do; that's harder to optimize, since the JIT can't just separate the phases and then optimize. And in Python, metaprogramming also requires heavy runtime tricks (sometimes even relying on implementation details of CPython) that any JIT must then support properly and execute efficiently.
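Julia inherits the same separation, so a rough sketch of the idea in Julia (names made up for illustration) might look like:

    # the metaprogramming runs once, at macro-expansion time; the JIT only
    # ever sees the plain expanded code
    macro unrolled_sum(n::Int)
        terms = [:(xs[$i]) for i in 1:n]
        return esc(:(+($(terms...))))
    end

    function sum4(xs)
        # after expansion this body is just xs[1] + xs[2] + xs[3] + xs[4]
        @unrolled_sum 4
    end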

Second is cultural: processing power was not abundant when most Lisp dialects were created and became popular, so the ecosystem grew more aware of which kinds of dynamism are good and which are bad. Julia is another example: Julia developers have been much more performance aware since its release, so they don't deliberately abuse `Any` containers, type-unstable code, eval, runtime function redefinitions and invokelatest, global variables, some kinds of introspection, and other anti-patterns that any JIT built after the fact (decades of libraries later) would have incredible trouble making fast. And in Python, the performance-aware people always focused on the FFI instead of restricting the harder-to-optimize parts of the language, which only made it harder to build a Python-only performance solution that can handle this polyglot ecosystem.

It's not an exaggeration to say that billions of dollars and untold man-hours from some of the smartest programmers alive have been spent on trying to make Python faster.

If it was as easy as in Lisp or Smalltalk, don't you think people would have succeeded by now?

Another example worth considering: JavaScript is yet another language that is arguably more dynamic than Python, yet is rather easy to JIT. What's going on there?

It comes down to the fact that it's not really about the amount of dynamism in a language, but about the kinds of dynamism that are actually relevant when implementing a JIT compiler.

Politics also play a role.
Personally (despite, or perhaps because of, Python once being my favorite language), I can't wait. Dynamic typing is too much of a mental strain once you are handed someone else's ML algo code and have to integrate it under tight deadlines.

Though I recently discovered opentelemetry and I'm working on a set of decorators I can use to instrument these monstrosities and figure out what all these rando 5 letter undocumented variables do.

Just FYI, Julia is dynamically typed too. However, I think it is also an existence proof that most of the things people complain about in dynamic languages aren't actually inherent to being a dynamic language.
To be fair though, Julia's types play a much more prominent role than in your usual dynamically typed language.

Whilst there isn't a compiler enforcing strictness, it is idiomatic and encouraged to write type-stable code wherever possible.
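A tiny illustration of what "type-stable" means (my own example, not from the thread): the return type should depend only on the argument types, not on run-time values.

    unstable(x::Int) = x > 0 ? x : "negative"   # may return Int or String
    stable(x::Int)   = x > 0 ? x : zero(x)      # always returns Int

    # @code_warntype unstable(3) flags the Union{Int64, String} return type in red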

This will only work if they get sufficient traction from a community. And they still miss some basics, such as Qt bindings.
The Julia community is large and very active. Julia is also rapidly ascending in various language rankings (19 on IEEE Spectrum, 28 on Tiobe).

It's sort of a random ask, but Qt bindings do exist:

https://juliahub.com/ui/Packages/Qt_jll/s7blD/5.15.0+3

along with a higher level QML wrapper:

https://juliahub.com/ui/Packages/QML/JLkMo/0.6.0

Weird, of all the basics I can think of I wouldn't put Qt bindings in the top 10.
I think that person lives in a Qt bubble.
Qt bindings are a "basic thing" for a language? Why?
For some strange reason I always know when a Qt application was written in Python instead of QML/C++.
If you consider Qt bindings "basics" then we have very different opinions on what the word basic means.
This reminds me of these slides on the Julia compiler and Zygote [1] that really impressed me at the time. Not only is the compiler incredibly smart, it also gives the programmer a lot of tools to talk to it (the introspection macros, IRTools, macros on the AST, macros on the IR via generated functions).

And maybe a little related: while traits are not a feature of the language, you can use this kind of compile-time optimization to implement trait dispatch that the compiler will completely erase from the generated code, the so-called Holy traits [2]. A small sketch follows the links below.

[1] https://docs.google.com/presentation/d/1IiBLVU5Pj-48vzEMnuYE...

[2] https://invenia.github.io/blog/2019/11/06/julialang-features...
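A minimal sketch of the Holy-traits pattern (illustrative names, not taken from the post):

    abstract type IterationStyle end
    struct Fast <: IterationStyle end
    struct Slow <: IterationStyle end

    # trait function: maps types to trait values; resolvable at compile time
    iterationstyle(::Type{<:Array}) = Fast()
    iterationstyle(::Type)          = Slow()

    # the entry point forwards on the trait value, and dispatch picks the
    # right method; after inlining, the trait machinery leaves no trace in
    # the generated code
    process(x) = process(iterationstyle(typeof(x)), x)
    process(::Fast, x) = "use the contiguous fast path"
    process(::Slow, x) = "use the generic fallback"

So `process([1, 2, 3])` hits the fast path, `process((1, 2, 3))` the fallback, and `@code_typed` shows the trait dispatch folded away.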

This is a good feature! Sometimes, you want to use `if` `else` based on type information, say if your function is very large and the different behavior based off of type is only a small piece of the process you want to convey.

It's great to have the flexibility.

> This is a good feature! Sometimes, you want to use `if` `else` based on type information, say if your function is very large and the different behavior based off of type is only a small piece of the process you want to convey.

Super interesting point, thank you.

I am not a Julia user, can the compiler optimize away the run-time type checking when `foo`'s argument is not easily known at compile time?

e.g.,

    foo(function_which_returns_dynamic_type())
Yes, this is another reason to prefer Julia's dispatch over manual dispatch. In this case the cost of the dynamic type being returned is just a _single_ dynamic dispatch. Dynamic dispatch means that the correct method of foo will be looked up in Julia's method table. But once that foo is called, all types are inferred, and it will run optimized for those types.

This is called a function barrier.

A function barrier can be used as an optimization too: if you have a large function where one of the local variables has a type that is abstract / not concrete / not properly inferred by the compiler (e.g. `::Union{A,B,C}`, `::AbstractVector`, `::Any`), it can be beneficial to split the function into two functions, such that the latter function can be optimized for concrete types, and the former function only has the overhead of a single dynamic dispatch.
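A rough sketch of that pattern (the names are invented for illustration):

    function summarize(cfg::Dict{String,Any})
        data = cfg["samples"]      # the compiler only knows ::Any here
        return sum_sq(data)        # single dynamic dispatch at the barrier
    end

    # inside the barrier `data` has a concrete type (e.g. Vector{Float64}),
    # so this loop compiles to tight, fully typed code
    function sum_sq(data)
        s = zero(eltype(data))
        for x in data
            s += x * x
        end
        return s
    end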

I think you may have some misconceptions. Julia is not compiled ahead of time. A function does not spring into existence until the definition of the function is executed. Same applies to a struct definition.

In a statically typed language you might write a function that takes an integer like this `foo(x::Int)` but in Julia it is completely possible to replace the `Int` with an expression that returns a type object. Meaning you actually run code at runtime to figure out what the defined function should actually operate on.

So there is nothing known at compile time, because everything happens at runtime. However when a function is called for the first time, then the JIT compiler will compile it and store the result. But it will store different compiled versions of the same function depending on the type of the argument it was called with. If you call the same function later with the same type of argument, it will reuse a previously compiled version.

If this is all very unclear. You could look at an article I wrote contrasting static typing with how typing works in Julia: https://medium.com/@Jernfrost/types-in-c-c-and-julia-ce0fcbe...

> A function does not spring into existence until the definition of the function is executed.

Methods and their type signatures do exist within the runtime as soon as the function expression is evaluated. To simplify the example from your article

  julia> a = 20
  20

  julia> foo(x::(a < 10 ? Int : Float64)) = "thing"
  foo (generic function with 1 method)

  julia> methods(foo)
  # 1 method for generic function "foo":
  [1] foo(x::Float64) in Main at REPL[2]:1
Here you see that one method of the function foo has been defined, and the code in the type signature has already run, resulting in a method which takes x::Float64. After this point, changing the value of `a` will not change anything about foo.
> when `foo`'s argument is not easily known at compile time?

There's no such thing. In Julia, all types are computed at compile time.

The catch is types such as `Union{Int, Float64}`. If fwrdt returns that, the compiler will generate a specialized method for `foo(::Union{Int,Float64})`. But for a value of that union type, a check like `x isa Int` cannot be resolved at compile time, so that specialized method will have run-time type checks.

Julia provides some static analysis tools to detect this, and help you locate the `1` in fwrdt which should be a `1.0`.

While it's true that all code is typed (e.g. the compiler knows what types to expect at different points throughout the program), it's completely possible to have essentially "worthless" types computed. E.g. if I have the following:

    foo(x::Int) = x + 1
    foo(x::Float64) = x + 2
    foo(x::String) = "$(x) $(x)"

    function eval_user_input()
        user_input = readline(stdin)
        user_obj = eval(Meta.parse(user_input))
        return foo(user_obj)
    end

Then clearly there is no way the compiler can know what the type of `user_obj` is, however it will do its best to infer the entire function. We can use `@code_warntype` to quickly and easily find places where Julia's type inference gives a suboptimal result (which is often a source of performance issues, since there will be runtime overhead in those cases).

    julia> @code_warntype eval_user_input()
    Variables
      #self#::Core.Compiler.Const(eval_user_input, false)
      user_input::String
      user_obj::Any

    Body::Union{Float64, Int64, String}
    1 ─      (user_input = Main.readline(Main.stdin))
    │   %2 = Base.Meta.parse::Core.Compiler.Const(Base.Meta.parse, false)
    │   %3 = (%2)(user_input)::Any
    │        (user_obj = Main.eval(%3))
    │   %5 = Main.foo(user_obj)::Union{Float64, Int64, String}
    └──      return %5

You can see that `user_obj` is annotated as type `Any`, and that invoking `foo()` is said to itself return a type of `Union{Float64, Int64, String}`, which is to say, it has no idea which `foo` it's going to call ahead of time.

I will note that in the REPL, all the suboptimal stuff is highlighted in red, making this a very nice debugging tool for immediately finding poorly-inferred sections of code.

If we ask the compiler to run this code through its optimization passes as well by passing the `optimize=true` flag to `@code_warntype`, we can even see how the compiler tries to inline `foo` by turning the call to `foo()` into a series of `isa()` conditionals, checking the type of the return from `eval()`. I'm pasting the relevant section here; for the full example you can see the screenshot linked below, showcasing the red highlighting as well:

    10 ┄ %20 = φ (#8 => %12, #9 => %18)::String
    │    %21 = Base.Meta.parse::Core.Compiler.Const(Base.Meta.parse, false)
    │    %22 = invoke Base.Meta.:(var"#parse#4")(true::Bool, true::Bool, %21::typeof(Base.Meta.parse), %20::String)::Any
    │    %23 = Main.eval(%22)::Any
    │    %24 = (isa)(%23, String)::Bool
    └───       goto #12 if not %24
    11 ─ %26 = π (%23, String)
    │    %27 = invoke Base.string(%26::String, " "::String, %26::Vararg{String,N} where N)::String
    └───       goto #17
    12 ─ %29 = (isa)(%23, Float64)::Bool
    └───       goto #14 if not %29
    13 ─ %31 = π (%23, Float64)
    │    %32 = Base.sitofp(Float64, 2)::Float64
    │    %33 = Base.add_float(%31, %32)::Float64
    └───       goto #17
    14 ─ %35 = (isa)(%23, Int64)::Bool
    └───       goto #16 if not %35
    15 ─ %37 = π (%23, Int64)
    │    %38 = Base.add_int(%37, 1)::Int64
    └───       goto #17

[0] https://i.imgur.com/DZPWUvi.png
This only shows that it can do method type specialization to get rid of type dispatch.

Can you add a new multimethod clause to an existing if-then-else type dispatch method? For example add:

  foo(x::Baz) = x.foo
Or is harm still done because the method is not open for extension when written like that?
You can't edit a function like that. If you add a new function definition for

    foo(x::Baz)
then calls with a `Baz` argument will never enter the method that has those if/else statements. This is the benefit of multiple dispatch. You can't add more if/else statements inside a method that is already written, but you can add new methods.
Manual dispatch can be an antipattern hindering composability though. If package A does:

    function foo(x)
      # some lines of code
      y = x isa Int ? 2.0 : 3x
      # some other lines that do things with y
    end
Then package B would have to define foo(x::MyType), copy the contents from package A, change the `y = ...` bit, and make sure the rest stays in sync with package A.

If instead package A did this:

    foo_magic(x::Int) = 2.0
    foo_magic(x) = 3x

    function foo(x)
      # some lines of code
      y = foo_magic(x)
      # some other lines that do things with y
    end
Your package B could just add an extra definition of foo_magic(x::MyType) instead.
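That is, something like this in package B (`PackageA`, `MyType`, and the rule are made up for illustration):

    import PackageA: foo_magic

    struct MyType
        val::Float64
    end

    # PackageA's foo(x) now picks this up automatically via dispatch
    foo_magic(x::MyType) = 4 * x.val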
Missing harm considered harmful.
