Leverage and confidence from the credit agencies (be they banks or private investors), plus a higher chance of the borrowing being approved — which means more bad debt gets created and contributes more to total GDP, the holy grail those economists chase after.
Basically, better rates to take on more debt. More importantly (and part of the risk calculus), they have an escape hatch if they really need to exit the business.
Really, I would love to know how to parse context-sensitive stuff like typedef, which "switches" the syntax for some tokens. I'd also like to understand things like "hoisting" in C++, where code inside a member function can use a class or struct member that is only declared later, but I just find it hard to describe these in a rigorous formal language and grammar.
Hacky solutions for PEGs, such as adding a context stack, require careful management of the entry/exit points, but the more fundamental problem is that you still can't "switch" syntax — or you have to add every possible syntax combination depending on the number of such stacks. I believe persistent data structures and transactional data structures would help, but I just couldn't find a formalism for that.
Another possible solution is the usage of functional parsers (e.g.: [0]) and making use of some form of the ‘do’ notation. Each step makes its result available to all subsequent parsers.
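To make the 'do'-notation idea concrete, here is a minimal sketch in Python (not the library in [0]; every name here — `run`, `char`, `many1`, `take`, `count_prefixed` — is made up for illustration). Generators play the role of 'do' blocks: each `yield parser` step binds its result to a variable that later steps can use, which is exactly what makes context-sensitive parsing (like a length-prefixed payload) expressible:

```python
def run(do_block, text):
    """Drive a generator-based 'do' block over `text`.
    Each yielded parser is a function: input -> (value, rest)."""
    gen = do_block()
    rest = text
    parser = next(gen)                 # first yielded parser
    try:
        while True:
            result, rest = parser(rest)    # run it on the remaining input
            parser = gen.send(result)      # bind the value, get the next parser
    except StopIteration as stop:
        return stop.value, rest            # the do-block's return value

def char(c):
    def p(s):
        if s and s[0] == c:
            return c, s[1:]
        raise ValueError(f"expected {c!r}")
    return p

def many1(pred):
    def p(s):
        i = 0
        while i < len(s) and pred(s[i]):
            i += 1
        if i == 0:
            raise ValueError("expected at least one matching char")
        return s[:i], s[i:]
    return p

def take(n):
    def p(s):
        if len(s) < n:
            raise ValueError("unexpected end of input")
        return s[:n], s[n:]
    return p

def count_prefixed():
    n = yield many1(str.isdigit)   # an earlier result...
    yield char(':')
    payload = yield take(int(n))   # ...decides how much the next parser reads
    return payload

print(run(count_prefixed, "3:abcdef"))  # ('abc', 'def')
```

The key point is that `n` is an ordinary variable by the time `take(int(n))` is constructed, so the grammar genuinely depends on previously parsed values — something a fixed PEG cannot express.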
I meant exactly what the parent comment pointed out: that C can't be parsed without a symbol table. Take the example from Wikipedia:
A * B;
Which represents either a multiplication, or a declaration of B as a pointer of type A*, depending on what the symbol table looks like. That means parsing C is impossible without these hacks, and you basically need to parse the whole file to build up this information.
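A toy sketch of the ambiguity (Python, nowhere near a real C parser — `classify` and the typedef set are hypothetical): the exact same token stream parses differently depending on what was typedef'd earlier, which is why a grammar alone can't decide it.

```python
def classify(stmt, typedefs):
    """Classify a statement shaped like 'X * Y;' as a declaration or an
    expression, depending on the typedef table built up so far."""
    name = stmt.split()[0]
    if name in typedefs:
        return "declaration"   # A is a type: declares B as a pointer A*
    return "expression"        # A is a variable: multiplication A * B

typedefs = set()
print(classify("A * B;", typedefs))  # expression
typedefs.add("A")                    # as if we'd seen: typedef int A;
print(classify("A * B;", typedefs))  # declaration
```

This is the essence of the classic "lexer hack": the lexer/parser consults a symbol table that the parse itself is building.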
A lot of text editors that only support grammars can't parse this properly; there are a ton of buggy C parsers in the wild.
The issues that led to this were completely avoidable, and many languages like Pascal (which was more or less its contemporary), Go or Rust did avoid them. They don't lead to the language being more powerful or expressive.
Calling it the 'worst' might be hyperbole, but given how influential C-style syntax has become, and how much C code is out there, these issues have affected a ton of other languages downstream.
So you were criticizing the C language syntax without considering the context it was designed in.
Just to give this context a little more substance: Pascal was designed on a mainframe that could address up to 4MB of RAM, with a typical setup of around 1MB (those aren't quite the real figures: on the CDC 6600 the value is 128K words, but with 60-bit words). These machines were beasts designed for scientific computation.
The first C compiler was implemented on a PDP-11, which could handle up to 64KB of RAM and had 16-bit words.
I assume that these constraints had a heavy influence on how each language was designed and implemented.
Note that I wasn't aware of all these details before writing this comment, I had to check.
Yes, I would argue that in general Rust is faster than C, because some problems that hinder C performance — such as aliasing issues and the handling of volatile data — simply don't exist in Rust, and immutable/const propagation and const evaluation work well there too.
Yes, the same way that Fortran is faster than C due to stricter aliasing rules.
But in practice C, Rust and Fortran are not really distinguishable in larger projects. There, things like data structures and libraries are going to dominate over slightly different compiler optimizations — usually Rust's `std` vs `libc` type stuff, or whatever foundational libraries you pull in.
For most practical purposes, Rust, C, C++, Fortran and Zig have about the same performance. Then there is a notable jump down to things like Go, C# and Java.
> In larger projects things like data structures and libraries are going to dominate over slightly different compiler optimizations.
At this level of abstraction you'll probably see, on average, an effect based on how easy it is to access and use better data structures and algorithms.
Both the ease of access to those (whether the language supports generics, how easy it is to pull in libraries/dependencies) and whether the available algorithms and data structures are up to date or decades old will have an impact.
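A quick illustration of why data-structure choice dominates compiler-level constant factors (Python sketch; the sizes are arbitrary): the same membership-heavy task against a list vs. a set. The O(n)-vs-O(1) gap between the structures is orders of magnitude, far larger than any gap between optimizing compilers.

```python
import time

N = 10_000
items = list(range(N))
as_list, as_set = items, set(items)

def count_hits(haystack):
    # Same algorithm either way; only the container's lookup cost differs.
    return sum(1 for x in range(N) if x in haystack)

t0 = time.perf_counter(); count_hits(as_list); t_list = time.perf_counter() - t0
t0 = time.perf_counter(); count_hits(as_set);  t_set  = time.perf_counter() - t0
print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")  # set wins by a huge factor
```

A language that makes reaching for the right container trivial (generics, a rich standard library, easy dependencies) captures this win by default.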
Caffeine is quite interesting, because I often get even more tired 30 minutes after drinking coffee.
The first 30 minutes indeed get me very excited, but then I fall asleep soon after.
The same thing happens to me with energy drinks like Red Bull or Monster, so I mostly drink them for competitive activities that only last a few hours.
I'm the same with coffee as a caffeine source, but I can drink a sugar-free energy drink (Monster Ultra) and the caffeine in that does the job without any of the sleepiness side effects.
Well, you don't manage them. If you find something really interesting, you often just start working on it... for example, `cargo new`, then add a bunch of packages and start getting it working...
That's exactly what I've been doing for the last 20 years. If something motivates you, you do it non-stop until you're bored, then switch to the next thing... the "hype" period lasts around 2 to 3 days, then you suddenly move on to something new.
That's why I have hundreds of POCs and toy projects at hand, but only a few of them materialized.
Is there a WireGuard equivalent that does L2 instead of L3? I need this for a virtual mesh network for homelabbing. I have this exact setup, running VXLAN or GENEVE over a WireGuard tunnel using KubeSpan from Talos Linux, but I simply think having L2 access would make load balancing much easier.
I achieve load balancing by running native WireGuard on a VPS at Hetzner. I've got a native WireGuard mesh where the peers are manually set up (I believe Talos can do the same, or via Tailscale etc.). I then tell k3s that it should use the WireGuard interface for VXLAN, and boom, my Kubernetes mesh is now connected.
flannel-iface: "wg0" # Talos might have something similar.
I do use some node labels and affinities to make sure the right pods end up in the right spot. For example, the MetalLB announcer always has to run on the Hetzner node. As mentioned in my reply below, it takes about a 20ms roundtrip back to my homelab, so my sites can take a bit of time to load, but it works pretty well otherwise — sort of similar to how Cloudflare Tunnels would work, except not as polished.
> I have this exact setup, running VXLAN or GENEVE […]
I see VXLAN mentioned all over the place, but it seems GENEVE isn't implemented nearly as widely. Besides it perhaps being a newer protocol, is there a reason why, in your opinion? Where do you personally use each?
What's funny to me is that "same here" and "+1" comments are still prominent even though GitHub introduced an emoji reaction system. It's like most people intentionally don't want to use it.
(Just kidding.) Some of it is unawareness of the 'subscribe' button, I believe; occasionally you'll see someone tell people to cut it out, and someone else will reply to the effect of wanting to know when it's fixed, etc. But it's also just lazy participation — echoing an IRL conversation, I suppose — that you see everywhere: replies instead of upvotes on Reddit, and to a slightly lesser extent here, for example.
Does anyone have recommendations for "slabtops", i.e. computers with a C64 form factor but with a small screen embedded in the keyboard, so I can use one as a scientific computer rather than a laptop?
Okay, at first I thought you were selling Tor access or vanity hidden service domains, since Tor stands for The Onion Router, but it turns out you're selling real onions.