If you look at their personal profile, they have a lot of beancount-related activity and collaboration outside of the main beancount repo. They definitely meet the criteria for 'FOSS dev', and it's all FOSS, too.
Thanks. I was not denying the author FOSS creds generally, just expecting actual involvement. It seems this is someone who cares about the project (and I do like the sound of the project).
I'm just, you know, pretty sensitive to HN submissions trying to sell me something.
In the SysV ABI for AMD64, the AL register is used to pass an upper bound on the number of vector registers used. Is this related to what you're talking about?
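E.g. roughly (sketching from memory, the exact codegen varies by compiler):

    #include <stdio.h>

    int main(void) {
        /* printf is variadic; 3.14 is passed in xmm0, so the caller is
           expected to put (an upper bound on) the number of vector
           registers used into AL before the call, something like:

               movsd  xmm0, [rip + .LC0]
               lea    rdi, [rip + fmt]
               mov    al, 1          ; <- the AL convention
               call   printf
        */
        printf("%f\n", 3.14);
        return 0;
    }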
On Linux you can turn overcommit off, and on some OSes it's off by default, especially in embedded, which is a major area of native coding. If you don't want to handle allocation failures in your app, you can just abort.
Also, malloc can fail even with overcommit if you accidentally pass an obviously incorrect size like -1.
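For example, even on a default Linux setup with overcommit enabled, something like this fails (a minimal sketch):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* (size_t)-1 is SIZE_MAX; no allocator can satisfy it, so
           malloc returns NULL regardless of overcommit settings. */
        void *p = malloc((size_t)-1);
        if (p == NULL) {
            perror("malloc");  /* typically "Cannot allocate memory" */
            return 1;
        }
        free(p);
        return 0;
    }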
I think this is true, unfortunately, and the question of how we get back to a liberal and social state has many facets: how do we get the economy working again, how do we create trustworthy institutions, how do we avoid bloat and decay in services, etc. There are no easy answers; I think it's just hard work, and it might not even be possible. People suggesting magic wands are just populists, and we need only look at history to see why those kinds of suggestions don't work.
It's funny how it's completely appropriate to talk about how the elites are getting more and more power, but if you then start looking deeper into it, you're suddenly a conspiracy theorist and hence bad. Who came up with the term 'conspiracy theorist' anyway, and with the idea that we should be afraid of it?
Just like we always have: a world war, after which the economy works amazingly for the ones left on top of the rubble pile, where they get unionized high-wage jobs and amazing retirements at an early age for a few decades, while everyone else is left toiling away making stuff for cheap in sweatshops, in exchange for currency from the victors who control the global economy and trade routes.
The next time the Monopoly board gets flipped, it will only be a variation on this, not a complete framework rewrite.
I always like to move as much as possible into the repo itself: 'issues' etc. in a TODO file, build scripts, or however you want to achieve that, so you can at least carry on uninterrupted when the host is down.
Many of those sites, incredibly for a company valued at $4 trillion, are managed by the teams themselves on their own infra, so when one of those restructuring rounds that big corps love doing almost every year comes along, some of that gets lost.
This, and also the fact that positions that involve writing and managing documentation typically don't have great promotion paths.
It's not just Microsoft; it's an industry-wide issue. It's not a job most software companies value, so ambitious people constantly leave for better positions, and the jobs keep getting moved to the cheapest cost center, where ownership and knowledge get lost and quality declines.
Basically all native libraries inevitably have bad or hard-to-follow documentation like this, proprietary or open source. Vulkan is the exception: as a standard, it needs to be very clear so that all stakeholders can implement it correctly.
Usually I find that if you're using an open source library you need the whole source checked out for reference, which is still better than proprietary libraries, where you need to pay and sign an NDA to get that access or equivalent support.
The Vulkan spec is missing tons of stuff. Implementers check that they pass the conformance tests (though those also miss stuff).
DirectX also has conformance tests.
The DirectX specs are arguably better in many ways than the Vulkan specs. They go into bit-level detail about how various math is required to work, especially in samplers.
I'm sure it misses stuff, but generally a 'spec' is better than a 'doc', because a spec has to contain enough info that you can at least guess how to implement it, whereas a doc can leave everything out: as long as the programmer has headers and some examples, they can probably do 90% of what's needed.
Extensions to Khronos standards are hardly well documented.
A TXT dump of the proposal, with luck a sample from the GPU vendor, and that is all.
Vulkan was famously badly documented; one only has to look at LunarG's yearly reports on community feedback about Vulkan, and the related action points.
OpenGL 4.6 never had a Red Book edition, Vulkan only had a Red Book for 1.0, OpenCL and SYCL just have the PDF reference, and not all Khronos APIs have a cheatsheet PDF on the Khronos site.
> Vulkan is the exception as it's a standard so needs to be very clear so all stakeholders can implement it correctly.
Lol... while the Vulkan documentation situation is a lot better than OpenGL's, it's not any better than the documentation of other 3D APIs, especially when trying to make sense of extensions (which depend on other extensions, which in turn depend on other extensions; once you're at the end of the breadcrumb trail you've already forgotten what the original question was).
I'm not them, but whenever I've used it, it's been for arch-specific features like adding a debug breakpoint, synchronization, using system registers, etc.
Never for performance. If I wanted to hand-optimise code I'd be more likely to use SIMD intrinsics, play with C until the compiler does the right thing, or write the entire function in a separate asm file for better highlighting and easier handling of state at the ABI boundary rather than mid-function, like the carry flags mentioned above.
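The breakpoint case is the kind of thing I mean; something along these lines (a sketch for GCC/Clang, the instruction is arch-specific):

    #include <stdio.h>

    /* Trap into the debugger (or raise SIGTRAP) from code. */
    static inline void debug_break(void) {
    #if defined(__x86_64__) || defined(__i386__)
        __asm__ volatile ("int3");
    #elif defined(__aarch64__)
        __asm__ volatile ("brk #0");
    #endif
    }

    int main(void) {
        puts("about to hit the breakpoint");
        debug_break();  /* SIGTRAP here if no debugger is attached */
        return 0;
    }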
Generally inline assembly is much easier these days because a) the compiler can see into it and make optimizations, and b) you don't have to worry about calling conventions.
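To illustrate both points: with extended asm, the constraints tell the compiler the inputs, outputs, and clobbers, so it allocates the registers and can inline and schedule around the snippet, and there's no calling-convention boilerplate. A rough sketch for GCC/Clang on x86-64 (function name made up):

    #include <stdint.h>
    #include <stdio.h>

    static inline uint64_t rotl64(uint64_t x, unsigned n) {
    #if defined(__x86_64__)
        __asm__ ("rolq %%cl, %0"
                 : "+r"(x)            /* in/out: any GP register    */
                 : "c"((uint8_t)n)    /* rotate count must be in CL */
                 : "cc");             /* flags are clobbered        */
        return x;
    #else
        return (x << (n & 63)) | (x >> (-(int)n & 63));
    #endif
    }

    int main(void) {
        printf("%llx\n", (unsigned long long)rotl64(1, 60));
        return 0;
    }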
> the compiler can see into it and make optimizations
People writing assembler typically think (or know) they can do better than the compiler, so the compiler being able to see into it isn't necessarily a good thing.
(Similarly, veltas' comment above about “play with C until the compiler does the right thing” describes something brittle. You don't even need to change compiler flags for it to suddenly stop doing the right thing. On the other hand, when compiling for a different version of the CPU architecture, the compiler can fix things, too.)
It's rare that I see compiler-generated assembly without obvious drawbacks in it. You don't have to be an expert to spot them. But frequently the compiler also finds improvements I wouldn't have thought of. We're in the centaur-chess moment of compilers.
Generally playing with the C until the compiler does the right thing is slightly brittle in terms of performance but not in terms of functionality. Different compiler flags or a different architecture may give you worse performance, but the code will still work.
“Advanced chess is a form of chess in which each human player uses a computer chess engine to explore the possible results of candidate moves. With this computer assistance, the human player controls and decides the game.
Also called cyborg chess or centaur chess, advanced chess was introduced for the first time by grandmaster Garry Kasparov, with the aim of bringing together human and computer skills to achieve the following results:
- increasing the level of play to heights never before seen in chess;
- producing blunder-free games with the qualities and the beauty of both perfect tactical play and highly meaningful strategic plans;
- offering the public an overview of the mental processes of strong human chess players and powerful chess computers, and the combination of their forces.”
Well I have benchmarks where my hand-written asm (on a fundamental inner function) beat the compiler-generated code by 3× :) Without SIMD (not applicable to what I was trying to solve).
And that was already after copious `assert_unchecked`s to have the compiler assume as many invariants as it could!
> “play with C until the compiler does the right thing” is brittle
It's brittle depending on your methods. If you understand a little about optimizers and give the compiler the hints it needs to do the right things, then that should work with any modern compiler, and is more portable (and easier) than hand-optimizing in assembly straight away.
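For example, something as small as restrict is often the hint that matters: it tells the compiler the arrays don't alias, and with -O2/-O3 the loop typically autovectorizes without any intrinsics or asm (a sketch, function name made up):

    #include <stddef.h>

    void saxpy(size_t n, float a,
               const float *restrict x, float *restrict y) {
        /* No aliasing between x and y, so the compiler is free to
           vectorize; check the generated code to confirm. */
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }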
Well, in my case I had to file an issue with the compiler (LLVM) to fix the bad codegen. Credit to them: it was lightning fast and they merged a fix within days.
Of course you can often beat the compiler; humans still vectorize code better, and there's the interpreter/emulator switch-statement issue I mentioned in the other comment. There are probably a lot of other small niches.
In the general case you're right. Modern compilers are beasts.