Unlike in the eighties and nineties, the CPU front-end (instruction decode) is hardly an issue these days. Oversimplified, the front-end translates instructions into something the CPU core actually executes. This part of the CPU used to take up a significant amount of die area, but that is no longer the case.
The CPU back-end is what matters, and the answer there is that it makes practically no difference.
For general purpose computing, RISC vs CISC is practically a dead topic now.
Of course you would not want a complicated CISC front-end in a microcontroller. But that's another topic.
Ask the ARM server vendors that routinely canceled their projects.
Also, RISC vs CISC is a debate for people stuck in the 80s. For example, RISC-V's special sauce isn't some random "it's better suited for web server workloads" theory with zero references or reasoning, provided by a random guy on HN. It's widespread adoption. It's the fact that it was intentionally kept simple to make it easy to implement, to the point that it is probably missing lots of functionality that we take for granted in modern servers.
Remember all the fanfare around the Cloudflare blog posts for a Qualcomm ARM server SoC that was never released? Yeah, the performance was surprising, but so was the amount of hardware-specific code the Cloudflare engineers had to write.
> Also, RISC vs CISC is for people stuck in the 80s.
Sometimes I joke on this topic along the lines of Indo-European languages vs Sino-Tibetan languages, or in layman's terms, if a little offensive [1], English vs Chinese. This analogy is quite intriguing to me, since English is relatively easier than Chinese, and Chinese carries more entropy per symbol than English, just as RISC is much simpler than CISC, while CISC generally does more in one instruction than RISC. Of course, the true holy grail would be VLIW, which is like Esperanto: barely anyone knows or uses it.
But as someone who speaks both languages and recognizes the difference between both ISA designs, I don't really think it matters, though there will always be some stubborn people who bitch about it. This kind of behavior sometimes leads me to think about linguistic relativity [2] and even to extend the whole argument into the highly political topic of linguistic determinism [3].
[1]: there are a couple more of those Indo-European and Sino-Tibetan languages. German is an obvious candidate.
> Remember all the fanfare around Cloudflare blog posts for a qualcomm ARM server SoC that was never released?
Announcing stuff that is never released seems to be a common pattern for Cloudflare. They announced their AWS S3-compatible competitor "R2" over 18 months ago [0] without a release, and then went very quiet about it. At this point it is just vaporware.
As was stated in the earnings call for Q1: "We're on track for R2 to progress to open beta in Q2 and then be generally available in the second half of 2022."
Great to see some progress. But the fact that it is only mentioned in an earnings call, where you more or less have to answer such questions, makes me sceptical. There is probably a reason why there is no mention of R2 anywhere on the Cloudflare site or in other channels targeted at developers.
> Ask the ARM server vendors that routinely canceled their projects.
Pre-Graviton/M1, you could argue there was a chicken-and-egg problem on the datacentre and higher-end compute side of ARM: smaller vendors couldn't achieve the volume, and so couldn't achieve the economies of scale to price an ARM chip competitively and drive revenue to reinvest; fewer binaries were built, keeping it niche; and so it continued to lag in compatibility and compute, while its power advantage wasn't enough by itself. Once big guns like AWS joined the fray, ARM's performance improved alongside its natural advantage in power consumption, and with big budgets they have the staying power to wait for network and ecosystem effects to win in the datacentre. Ironically, that probably now creates a better space for smaller players to tease out other customization benefits for other verticals, on top of a stronger core datacentre presence and a larger customer base. But before that happened, none of them had the resources of an Intel or AMD to compete for the datacentre by themselves, even if the x86 architecture and its legacy created a ceiling everyone could see and ARM was a natural successor.
I mean, that was sort of my question. If the performance is decent, why would you not write hardware-specific code, especially when operating at cloud scale like Cloudflare? I think I just lack a good understanding of the complexity involved.