raluk's comments | Hacker News

Over the holidays I was working on Steve Ballmer's interview puzzle. https://rahne.si/optimisation/2026/01/07/steve-ballmer-inter...

What I am most proud of is that I got the solution in the course of approx. one week of working on this!


Using only cold water for showers.

What changes did you observe after switching to cold showers?

It improves the immune system and general wellbeing. It is a pain that you can get used to, and I kind of enjoy it now in some weird way. It is hard and requires some dedication, and it brings some benefits, but it does not require extra time or planning. Great morale booster.

What are the potential issues with the compiler if you just disable the borrow checker? If I recall correctly, some compiler optimisations for Rust cannot be done in C/C++ because of the restrictions implied by the borrow checker.


Rust can apply `restrict` semantics (LLVM's `noalias`) to all references, because of the "1 mutable xor many shared references" rule. The borrow checker enforces this. https://en.wikipedia.org/wiki/Restrict
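
A minimal sketch of what this buys (illustrative names, not from the linked article): with an exclusive `&mut` borrow the compiler knows the two arguments cannot alias, which the equivalent C loop only gets with `restrict`:

    // `dst` is an exclusive borrow, so it cannot alias `src`;
    // LLVM gets `noalias` on it and can keep `*src` in a register
    // instead of reloading it on every iteration.
    fn add_to_all(dst: &mut [u64], src: &u64) {
        for d in dst.iter_mut() {
            *d += *src;
        }
    }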


The crazy part about this is that (auto) vectorization in Rust looks something like this: iter.chunks(32).map(vectorized)

Where the vectorized function checks if the chunk has length 32: if yes, run the algorithm; else, run the algorithm.

The compiler knows that the chunk has a fixed size at compile time in the first branch, which means it can now attempt to vectorize the algorithm with a SIMD width of 32. The else branch handles the scalar case, where the chunk is shorter than 32.
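
A minimal sketch of that shape (function names are illustrative):

    // Both branches run the same code, but in the first one the
    // compiler can prove `chunk.len() == 32`, a constant, so LLVM
    // can unroll and vectorize that loop with full SIMD width.
    fn chunk_sum(chunk: &[f32]) -> f32 {
        if chunk.len() == 32 {
            chunk.iter().sum() // vectorized path
        } else {
            chunk.iter().sum() // scalar tail, fewer than 32 elements
        }
    }

    fn sum_all(data: &[f32]) -> f32 {
        data.chunks(32).map(chunk_sum).sum()
    }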


Hah I love things like this, where the compiler leaks out.


Without the borrow checker, how should memory be managed? Just never deallocate?


The borrow checker does not deal with ownership, which is what Rust's memory management leverages. The borrow checker validates that borrows (references) are valid, i.e. that they don't outlive their sources and that exclusive borrows don't overlap.
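
A minimal sketch of both checks (neither function compiles; error codes as rustc reports them):

    // A borrow outliving its source is rejected.
    fn dangling() -> &'static String {
        let s = String::from("hi");
        &s // error[E0515]: cannot return reference to local variable `s`
    }

    // An exclusive borrow overlapping another borrow is rejected.
    fn overlap() {
        let mut v = vec![1, 2, 3];
        let a = &mut v;
        let b = &v; // error[E0502]: cannot borrow `v` as immutable
                    // because it is also borrowed as mutable
        a.push(4);
        println!("{b:?}");
    }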

The borrow checker does not influence codegen at all.


It would be the same as in any language with manual memory management: you'd simply get a dangling pointer access. The 'move-by-default' semantics of Rust just make this a lot trickier than in a 'copy-by-default' language, though.

It's actually interesting to me that the Rust borrow checker can 'simply' be disabled (e.g. no language- or stdlib-features really depending on the borrow checker pass) - not that it's very useful in practice though.


The same as in C++: destructors get called when an object goes out of scope.
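
A minimal sketch in Rust, where the Drop trait plays the destructor role:

    struct Guard(&'static str);

    impl Drop for Guard {
        fn drop(&mut self) {
            println!("dropping {}", self.0);
        }
    }

    fn main() {
        let _outer = Guard("outer");
        {
            let _inner = Guard("inner");
        } // prints "dropping inner" here
    }     // then "dropping outer"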


"Math nerd explains how to spend 3 days proving 1+1=2" -> Original "From Zero to QED: An informal introduction to formality with Lean 4" https://news.ycombinator.com/item?id=46259343


Years ago I wrote a C++ library for stream composition, something like C++20 ranges. It turns out that as long as you compose everything with lambdas, the compiled code is the same as it would be with naive loops. Everything gets optimised.

For example, you can write the sum of all numbers less than n as:

  count(uint64_t(0)) 
   | take(n) 
   | sum<uint64_t>();
Clang converted this into n*(n-1)/2.
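
For comparison (not the library above), LLVM does the same closed-form rewrite for a plain Rust iterator chain:

    // Compiles to the closed form n * (n - 1) / 2, with no loop.
    fn sum_below(n: u64) -> u64 {
        (0..n).sum()
    }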


How much is FFT used for AI? It seems that attention and convolution could benefit from this.


There are architectures, such as the Fourier Neural Operator (FNO), that utilize FFTs within them. These are particularly popular in deep-learning weather prediction problems.
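
A minimal sketch of the convolution theorem such architectures exploit, using the rustfft crate (illustrative data; circular convolution via zero-padding):

    use rustfft::{num_complex::Complex, FftPlanner};

    fn main() {
        let n = 8; // zero-padded length
        let mut planner = FftPlanner::<f64>::new();
        let fft = planner.plan_fft_forward(n);
        let ifft = planner.plan_fft_inverse(n);

        let to_c = |xs: &[f64]| {
            let mut v: Vec<Complex<f64>> =
                xs.iter().map(|&x| Complex::new(x, 0.0)).collect();
            v.resize(n, Complex::new(0.0, 0.0));
            v
        };
        let (mut a, mut b) = (to_c(&[1.0, 2.0, 3.0]), to_c(&[4.0, 5.0]));

        fft.process(&mut a);
        fft.process(&mut b);

        // Convolution becomes a pointwise product in frequency space:
        // O(n log n) instead of O(n^2) for the direct form.
        let mut c: Vec<Complex<f64>> =
            a.iter().zip(&b).map(|(x, y)| x * y).collect();
        ifft.process(&mut c);

        // rustfft leaves the inverse unnormalized, so divide by n.
        let result: Vec<f64> =
            c.iter().map(|v| (v.re / n as f64).round()).collect();
        println!("{result:?}"); // the circular convolution: 4, 13, 22, 15, then zeros
    }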


Having lexical scope, it is the same as -> defer fn{ if (some condition) x() }() within the scope.


Except 'some condition' may change, or can be long or expensive to evaluate, so you likely need an extra variable.


Given that you die when one vital organ dies, it will eventually happen in one way or another. I read somewhere, and it stuck in my brain, that the maximal longevity for humans is estimated to be approximately 125 years.


>> I read somewhere, and it stuck in my brain, that the maximal longevity for humans is estimated to be approximately 125 years.

Oh, that's just derived from old theology.

Genesis 6:3, 'Then the LORD said, “My Spirit will not contend with humans forever, for they are mortal; their days will be a hundred and twenty years.”'

This kinda got spread throughout the zeitgeist long ago as a "maximal lifespan", but the reality is that only 3 in 10,000 even make it to 100. There's no hard cutoff, but functionally essentially no one gets to 110.

Scientifically, there's no hard reason we couldn't increase our lifespans indefinitely, but we've got a lot of work to do before we'll be able to get a reasonable number of people up to 125.


That doesn’t sound like you’re disagreeing with gp.


One thing that most languages are lacking is a way to express lazy return values. -> await f1() + await f2(), where expressing this concurrently requires manual handling of futures.


you mean like?

   await Join(f1(), f2())
Although more realistically

   Promise1 = f1(); Promise2 = f2();
   await Join(Promise1, Promise2);
But also, futures are the expression of lazy values so I'm not sure what else you'd be asking for.


This is what I had in mind with "manual handling of futures". In this case you have to write

   Promise1 = f1(); Promise2 = f2();
   v1,v2 = await Join(Promise1, Promise2);
   return v1 + v2
I think this is just too much syntactic noise.

On the other hand, it is necessary because some of the underlying async calls can be order dependent.

for example

    await sock.rec(1) == 'A' && await sock.rec(1) == 'B'
checks that the first received socket byte is A and the second is B. This is clearly order dependent and can't be executed concurrently out of order.


I suppose you'd have to make

    SumAsync(f1(), f2());
But it's kind of intractable, isn't it? Your language has to assume either order dependence or independence and let you specify the other. Most seem to stick with "lexical ordering implies execution order".

I think some use curly-brace scoping to break up dependency. I want to say Kotlin does something like this.

This is why they say async is a viral pattern, but IMO that's because you're adding specificity, and function coloring is necessary and good.


Which languages do have such a thing?


Rust does this, if you don’t call await on them. You can then await on the join of both.
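
A minimal sketch with the futures crate's join! macro (an executor such as block_on is assumed):

    use futures::join;

    async fn f1() -> u64 { 1 }
    async fn f2() -> u64 { 2 }

    async fn sum() -> u64 {
        // Rust futures are lazy: neither runs until polled.
        let (a, b) = join!(f1(), f2()); // polled concurrently
        a + b
    }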


Is the "join" syntax part of the language?


Why is having it be syntax necessary or beneficial?

One might say "Rust's existing feature set makes this possible already, why dedicate syntax where none is needed?"

(…and I think that's a reasonably pragmatic stance, too. Joins/selects are somewhat infrequent, and the impediments that writing out a join puts on the program are relatively light… what problem would be solved?

vs. `?`, which sugars a common thing that non-dedicated syntax can represent (a try! macro is sufficient to replace ?) but for which the burden on the coder is much higher, in terms of code readability & writability.)



Then it doesn’t apply in this case.


why?


Because parent asked for a language feature, not runtime: "One thing that most languages are lacking..."


I haven't linked a runtime but the specific feature, which alleviates manual handling of futures when awaiting multiple futures concurrently, expressed in the language named Rust.


I suppose Haskell does, as `(+) <$> f1 <*> f2`.


There is also ApplicativeDo, which works nicely with this.

    do 
      x <- f1
      y <- f2
      return $ x + y
This is evaluated applicatively in the same way.


That's because f2's result could depend on whether f1 has executed.


A great exercise-driven course: https://github.com/system-f/lets-lens

