Ada's efficiency should be pretty close to C's, but that strange thing Ada does where you define your own numeric types, like
type Day_type is range 1 .. 31;
creeps me out a little; it makes me think I'd have to throw Hacker's Delight [1] in the trash if I want to use it, but I could be wrong. It makes me think of the deep history of computing, when a few straggler architectures that didn't use the 8-bit byte (like the PDP-10 [2]) were still around, and when Knuth wrote a set of programming books based on an instruction set that wasn't necessarily binary.
(Lately I was thinking about making a fantasy computer that could be hosted in JavaScript, which was going to be RISC but otherwise totally over the top; for instance, it would have bitwise addressing like the PDP-10. First I wanted it to be 24-bit, but then I figured I could pack 48 bits in a double, so I might as well. It would even have a special instruction for unpacking UTF-8 characters, and a video system intended for mixing Latin and CJK characters. It still boils down to an 8-bit byte, but like the PDP-10 it could cut out 11-bit slices or whatever you want. I was going to say screw C, but then I figured out you could compile C for it.)
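The 48-bits-in-a-double trick works because an IEEE-754 double (the only number type classic JavaScript gives you) has a 53-bit significand, so every integer up to 2^53 is represented exactly and 48-bit words have headroom to spare. A quick sketch in Python, whose floats are the same IEEE-754 doubles (the packing helpers are just for illustration):

```python
# Why 48-bit words fit losslessly in a JavaScript-style double:
# the 53-bit significand represents every integer up to 2**53 exactly.

MAX48 = (1 << 48) - 1

# Round-trip an all-bits-set 48-bit word through a float.
assert int(float(MAX48)) == MAX48

# Past 53 bits, exactness breaks down:
assert int(float((1 << 53) + 1)) != (1 << 53) + 1

# Pack two hypothetical 24-bit halves into one 48-bit word and back.
def pack24(hi, lo):
    return (hi << 24) | lo

def unpack24(word):
    return word >> 24, word & 0xFFFFFF

w = pack24(0xABCDEF, 0x123456)
assert unpack24(w) == (0xABCDEF, 0x123456)
assert int(float(w)) == w  # still exact as a double
```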
I recommend Marc-André Argentino's research on the Com; he's a conspiracy researcher who got his PhD on QAnon. He leans left, if that's a factor for you.
That piece doesn't get specific about Com activities at all, but be aware that some of the manifestos and other material he discusses are quite disturbing.
There's a subtext here of "what do I do when the high-level libraries like Sodium don't do exactly what I need", and the frank answer to that is: "well, you're in trouble, because consulting expertise for this work is extraordinarily expensive".
We have an SCW episode coming out next week about cryptographic remote filesystems (think: Dropbox, but E2E) with a research team that reviewed 6 different projects, several of them with substantial resources. The vulnerabilities were pretty surprising, and in some cases pretty dire. Complaining that it's hard to solve these problems is a little like being irritated that brain surgery is expensive. I mean, it's good to want things.
Reasonable developers are qualified to do those things. But to build a full-featured authentication subsystem for their webapp? If it's something that holds any kind of reasonably private info, I'm not so sure.
Sure, a reasonable developer will use some sort of PBKDF to hash passwords. But when users need a password reset over email, will they know not to store unhashed reset tokens directly in the database? Will they know to invalidate existing sessions when the user's password is reset? Will they reset a browser-stored session ID at login, preventing fixation? And on and on and on. The answer to some of these questions will be yes, but most developers will have a few for which the answer is no. Hell, I've probably built more auth systems than most (and have reported/fixed a few vulnerabilities in well-known open-source auth systems to boot), and I'm honestly not sure I'd trust myself to do it 100% correctly for a system that really mattered.
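The reset-token point deserves a sketch: treat the token like a password and store only a hash of it, so a database leak doesn't hand out live reset links. A minimal illustration using only the standard library (the helper names are mine, not from any particular framework):

```python
# Sketch: the emailed reset token is high-entropy and single-use, so a
# fast hash is enough -- the point is that the DB never holds the
# usable token itself.
import hashlib
import hmac
import secrets

def issue_reset_token():
    token = secrets.token_urlsafe(32)                    # emailed to the user
    stored = hashlib.sha256(token.encode()).hexdigest()  # goes in the DB
    return token, stored

def verify_reset_token(presented, stored):
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, stored)           # constant-time compare

token, stored = issue_reset_token()
assert verify_reset_token(token, stored)      # the real token checks out
assert not verify_reset_token("guess", stored)  # a leaked hash is useless alone
```

A real system would also give the token a short expiry and delete it after one use, but the store-only-the-hash rule is the part that's most often missed.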
Even outside of "holding the crypto wrong", these things have sharp edges and the more you offload to an existing, well-vetted library the more likely you are to succeed.
I love reading blog posts on OAuth2 and OIDC. I have a mental model of how they work, but every person has their own way to describe the flows. So far my favorite article has been the one on OAuth/OIDC from first principles [1], explaining why each piece of the protocol is useful.
This being Hacker News, any comment worthwhile cannot be devoid of criticism. Trust-on-first-use is used incorrectly here: saving the previous authorization scopes is just caching. TOFU has a specific definition in security: it's when you're establishing a secure channel but you haven't shared a secret or public key a priori, which makes it impossible to guarantee that the counterparty is who they say they are. Very concretely, TOFU is a Diffie-Hellman key exchange whose resulting shared secret can be MitMed. Through use over time the shared secret gains integrity, because the probability of a persistent MitM across channels degrades. The most common place TOFU is encountered is when connecting via ssh to a server whose host key is not yet in your known_hosts: the client asks you to accept the key sight unseen, then pins it for future connections.
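The known_hosts behavior boils down to a few lines. Here's a toy sketch of the TOFU pattern (all names are mine; real ssh pins the full host key, not just a hash):

```python
# Toy trust-on-first-use: the first connection pins the peer's key
# fingerprint; later connections only succeed if it still matches.
import hashlib
import hmac

pinned = {}  # host -> fingerprint; our stand-in for known_hosts

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()

def check_host(host: str, public_key: bytes) -> bool:
    fp = fingerprint(public_key)
    if host not in pinned:
        pinned[host] = fp  # first use: trust blindly (this is the gamble)
        return True
    return hmac.compare_digest(pinned[host], fp)  # afterwards: must match

assert check_host("example.com", b"key-A")      # first use: accepted
assert check_host("example.com", b"key-A")      # same key later: accepted
assert not check_host("example.com", b"key-B")  # changed key: reject, possible MitM
```

The security comes entirely from the second and later checks; the first connection is exactly the window where a MitM can slip in.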
Seems more than happy to talk about Tiananmen, Xi, etc. starting at line 170, with the very primitive method of wrapping the query in its own "<think>...</think>" syntax even though it's the user role. Uyghurs are more strictly forbidden as a topic, as are its actual system prompts. None of this is serious jailbreaking; it was just interesting to see where and when it drew lines, and that it switched to simplified Chinese at the end of the last scenario.
But there are other factors that feed into it: physical, emotional, social.
I highly recommend Attention Span by Gloria Mark, The Power of Full Engagement by Jim Loehr, and, for those who want to change their life, Tiny Habits by BJ Fogg.
Tintin is not a French comic; it's from Belgium. The first two albums are controversial, but they're also a product of their age. The Blue Lotus also paints a terrible picture of the Japanese.
Asterix is more consistent, complex, and rewarding for adults. Part of Asterix has also dated, because the endless cameos of real people who were popular in those years don't mean much to new generations.
The other European comic in the big leagues is the Spanish Mortadelo and Filemon. If Tintin is adventure and Asterix is clever wordplay, Mortadelo directly embraces sadistic fun in its own wild way. If you still don't know them, see the film "Mortadelo and Filemon: Mission Implausible" for a good glimpse of that world. You'll thank me later.
Two resources that helped me improve my writing when I was writing my thesis were "How to Write Mathematics" by Paul R. Halmos and "Mathematical Writing" by Donald E. Knuth et al.
I would always start with Halmos to get into the spirit of pursuing clear and precise communication.
The "Bad/Better/OK" suggestions especially reminded me of the discussions in the lecture notes from Knuth et al.
And as a third step, a linter such as the proposed one is probably helpful for catching anything that slips through.
I think these resources are essential for anyone who writes on any subject which at least involves definitions here and there.
specifically avoid resources written by and for physicists.
the model of quantum mechanics, if you can afford to ignore any real-world physical system and just deal with abstract |0>, |1> qubits, is relatively easy. (this is really funny given how incredibly difficult actual quantum physics can be.)
you have to learn basic linear algebra with complex numbers (can safely ignore anything really gnarly).
then you learn how to express Boolean circuits in terms of different matrix multiplications, to capture classical computation in this model. This should be pretty easy if you have a software engineer's grasp of Boolean logic.
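that "circuits as matrices" step fits in a few lines of plain Python: states are complex vectors, gates are unitary matrices, and the X gate is literally classical NOT. A toy sketch (NumPy would be the usual tool; this just avoids the dependency):

```python
# States are length-2 vectors, gates are 2x2 matrices, and applying a
# gate is ordinary matrix-vector multiplication.
import math

ZERO = [1, 0]  # |0>
ONE  = [0, 1]  # |1>

X = [[0, 1],
     [1, 0]]   # quantum NOT

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard: makes superpositions

def apply(gate, state):
    return [sum(gate[r][c] * state[c] for c in range(len(state)))
            for r in range(len(gate))]

assert apply(X, ZERO) == ONE   # X|0> = |1>, i.e. NOT 0 = 1

plus = apply(H, ZERO)          # (|0> + |1>) / sqrt(2)
probs = [abs(a) ** 2 for a in plus]
assert all(abs(p - 0.5) < 1e-12 for p in probs)  # 50/50 measurement odds
```

everything classical stays in the {0, 1} basis vectors; the quantum weirdness starts when gates like H take you off those axes.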
Then you can learn basic ideas about entanglement, and a few of the weird quantum tricks that make algorithms like Shor and Grover search work. Shor's algorithm may be a little mathematically tough.
realistically you probably will never need to know how to program a quantum computer even if they become practical and successful. applications are powerful but very limited.
"What You Shouldn't Know About Quantum Computers" is a good non-mathematical read.
Hustle culture nonsense aside, I find using Whisper (AI) transcription instead of typing to be super efficient.
My workflow involves setting up a Whisper server, downloading the Whispering(1) app on my computer, and binding it to a shortcut on my keyboard and mouse. Whenever I want to write something down, I just hit the shortcut and dictate, and it transcribes on the spot. With an Nvidia GPU (a 1070 in my case), transcription is nearly instantaneous. I haven't set it up on my MacBook yet, but I suspect it will be just as fast on Apple Silicon.
> For every project/job/app that needs the AWS levels of resilience (...)
I don't think you're framing the issue from an educated standpoint. You're confusing high availability with simply not designing a brittle service, which just takes attention to very basic things that are trivial to do. For example, supporting basic blue-green deployments, which come for free in virtually any conceivable way of deploying services: you only need a reverse proxy and just enough competence to design and develop services that can run in parallel. This is hardly an issue, and in this day and age not being able to pull it off is a hallmark of incompetence.
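For concreteness, a blue-green cutover with nothing but a reverse proxy can be as small as this hypothetical nginx fragment (ports and names are made up); both versions run side by side, and flipping traffic is editing one line and reloading:

```nginx
# Sketch: blue and green builds both stay up; cutover = swap the
# comments below and run `nginx -s reload`. Rollback is the same flip.
upstream app {
    server 127.0.0.1:8001;    # blue (live build)
    # server 127.0.0.1:8002;  # green (new build, deployed alongside)
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```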
> The juniors only get to run the show and introduce exotic, non-boring technology (asynchronicity, event-sourcing, eventual consistency, CQRS etc.) after the seniors have admitted defeat.
lol. This is definitely not my experience. Most problems are pretty boring and can be solved with a judicious use of boring tech. It isn't until you get to a sufficiently large scale that you need to introduce these techniques, and only if they actually further the goal.
Many times the existing system does something in a really dumb way because the original authors were growing organically as they explored the problem domain, which puts the system into a box of thinking (similar to an LLM spitting out trash because its context got polluted), or because of simple inertia. So the exotic tech is introduced to treat a symptom of the underlying issue without the crucial step of reconsidering the box the system sits in. The requirements at scale are now fundamentally different, and therefore the solution should be reconsidered.
If the seniors involved completely miss this step, then I question the breadth of their experience, because this is common when crossing a scale threshold. Being old, or having worked at the same company for 10 years, doesn't automatically mean someone is truly skilled; it could actually mean their experience is extremely limited.
I've done this with friends/family a couple times and wrote up a tutorial that I use as reference every couple months.
It has an optional step to password-protect the contents, if you have any qualms about the security-by-obscurity of using an unlisted torrent on a public tracker.
As a quick workaround, you can set a CSS filter on the whole page: either use dev tools to put a rule `filter: hue-rotate(60deg);` on the `body` element, or simply run `javascript:void(document.body.style.filter='hue-rotate(60deg)')` from the URL bar.
I moved to academia after six years of working in industry, mainly with C# and SQL. It was a deliberate attempt to find a way to work with Prolog. I guess that's a bit immature of me, but I fell in love with Prolog in the second year of my CS degree and I couldn't get over it, so here I am.
I did an MSc in data science first, then started a PhD to study Inductive Logic Programming (ILP), which is basically machine learning × Prolog (although there's also ASP ILP these days). I got my PhD last summer and I'm now doing a post-doc on a robotics project with Meta-Interpretive Learning (MIL), a recent form of ILP. Here's my latest publication:
It's still a bit proof-of-concept. We're at the very early stages of practical applications of MIL, so there's a lot of foundation work to do. Bliss :)
[1] https://en.wikipedia.org/wiki/Hacker%27s_Delight
[2] https://en.wikipedia.org/wiki/PDP-10