I've been to Nepal a bunch of times, and I usually recommend just passing quickly through KTM on your way to wherever you're going. The dust can be terrible, and it's loud and polluted - the opposite of why most people go to Nepal in the first place. Better to spend more time in the mountains or in Pokhara.
It's winter here. The dust is mostly an issue during summer. Also, if you're out trekking, it's a non-issue (especially higher up in the mountains).
> and not have to worry about the right libraries being installed on my system and whether I've generated a Makefile. Packages are easily searchable, hosted on popular platforms like GitHub, and I can file bugs and ask questions without having to join an obscure mailing list or deal with an unfriendly community.
Maybe it's just me, but that right there is the stuff of nightmares. What libraries, written by whom, is it going to pull in?
But what's changed is decidedly not "Now I don't know which libraries will be used or who made this library" but rather "The library I wanted was easier to get because the tools work".
Agreed. I don’t think easy package management is the problem, though. Rather, it’s just triggered a Cambrian explosion of packages, and now security needs to catch up.
It's directly and blatantly relevant to the discussion that the transformer was invented in America, and the cited role of immigrants in that invention shows how ending immigration will impact future innovation.
Plenty of countries gave Huawei the same treatment the US did, and the US and its allies have the weight to impose sanctions, tariffs, etc to punish consumers within their borders for daring to consider better and cheaper options.
The allies of the US all banned Huawei because the US asked them (quite forcefully) to do so.
CXMT is already under a full set of US long-arm sanctions, so very little of their product will likely ever reach Western markets.
However, some Chinese demand will definitely be met by CXMT's products displacing Western suppliers - so maybe there is a tiny bit of relief for Western consumers there.
> However, some Chinese demand will definitely be met by CXMT's products displacing Western suppliers - so maybe there is a tiny bit of relief for Western consumers there.
I recall years of hints that the affordable-housing crunch would eventually be helped by developers - even though they're only building tons of not-affordable housing.
We're five years in. No meaningful change is visible from the perspective of folks who need affordable housing.
Based on that lesson, I expect what CXMT does there to have no meaningful effect here.
> I recall years of hints that the affordable-housing crunch would eventually be helped by developers - even though they're only building tons of not-affordable housing.
If I may ask, which cities? For example, Austin has seen a 6.6% asking-price decrease for 0- to 2-bedroom units [1]. The big problem is that the hole is absolutely massive, and very few places are building "enough" to make a dent.
How could the subsidized-housing number increase from building non-subsidized housing? That is illogical. Market-rate housing will become cheaper, and therefore more housing will be affordable to more people, but you can’t make the number of “affordable housing” units go up by building anything else, because “affordable housing” is a brand name for subsidized housing.
I don't know about that. All I'm pointing out is that just because the US doesn't like China doesn't mean there isn't a bigger market out there. Even if China ends up servicing only that market, that's still a big chunk of the pie. Case in point: a Chinese DRAM maker flooding the market with cheaper DRAM (or any DRAM, for that matter -- thanks, Micron) will end up affecting the price of DRAM in the US.
That's true. The greater the supply, the lower the prices on a global scale (unless it all gets consumed by AI too). It makes me wonder whether AI data centers will ever be satisfied.
It sucks for everyone else, is what I'm saying. 100% of people should be allowed access, not preempted from it in order to protect the value of exalted tech cartels.
I'd like to know more. I expect these systems are 8xvh1782. Is that true? What's the theoretical math throughput? My expectation is that it isn't very high per chip. And how is performance in the prefill stage, where inference is actually math-limited?
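To make "math-limited" concrete, here's a back-of-the-envelope roofline sketch in C. All the numbers are placeholders for illustration (not specs for whatever chips these systems actually use); the point is just that decode streams every weight once per token, while prefill reuses each weight across the whole prompt batch:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical accelerator; these are made-up numbers. */
        double peak_flops = 100e12;  /* 100 TFLOP/s dense math  */
        double mem_bw     = 1e12;    /* 1 TB/s memory bandwidth */

        /* Machine balance: FLOPs per byte needed to keep the math units busy. */
        double balance = peak_flops / mem_bw;

        /* Decode: one token at a time streams every weight once,
           ~2 FLOPs per weight byte (int8) -> far below the balance
           point, so decode is bandwidth-bound. */
        printf("machine balance: %.0f FLOP/B, decode: ~2 FLOP/B\n", balance);

        /* Prefill: N prompt tokens reuse each weight N times, so
           arithmetic intensity grows with N and eventually crosses
           the balance point, making prefill math-bound. */
        for (int n = 16; n <= 1024; n *= 4)
            printf("prefill with %4d tokens: ~%d FLOP/B\n", n, 2 * n);
        return 0;
    }

With these placeholder numbers, decode sits at ~2 FLOP/B against a ~100 FLOP/B machine balance, while prefill crosses the balance point somewhere around a 64-token batch - which is why per-chip math throughput matters for prefill even on a bandwidth-rich system.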
You are clearly clueless about the current state of EVs. I'm willing to bet you haven't even driven or owned a Tesla or BYD. So you're uninformed at best.
All I know is I'd never buy a Tesla. Having seen them up close, the quality control is clearly not priority one. Unacceptable for a vehicle at that price.
> All I know is I'd never buy a Tesla. Having seen them up close, the quality control is clearly not priority one. Unacceptable for a vehicle at that price
You must be trolling. 'Having seen them up close' isn't a serious basis for an opinion on any vehicle. Take a proper 24-hour test drive and then talk about build quality.
People have tried, and so far, achieving safety through trusted compilers and (fairly complicated) run-time support has been much more efficient. A small team could probably design a RISC-V CPU with extensions for hardware-assisted bounds checking and garbage collection, but any real CPU they could build would likely have performance levels typical of research-oriented RISC-V CPUs. Doing the same thing in software on a contemporary commercially established CPU is going to be much, much faster.
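For anyone wondering what that run-time support looks like, here's a minimal sketch in C of the kind of software bounds check a trusted compiler can insert (the fat-pointer style; the names are illustrative, not from any particular toolchain):

    #include <stdio.h>
    #include <stdlib.h>

    /* A pointer paired with the length of the object it refers to. */
    typedef struct {
        char  *base;
        size_t len;
    } checked_ptr;

    /* The load a compiler would emit in place of a raw dereference. */
    static char checked_load(checked_ptr p, size_t i) {
        if (i >= p.len) {
            fprintf(stderr, "bounds violation at index %zu\n", i);
            abort();
        }
        return p.base[i];
    }

    int main(void) {
        char buf[8] = "abcdefg";
        checked_ptr p = { buf, sizeof buf };
        printf("%c\n", checked_load(p, 3));  /* in bounds: fine */
        checked_load(p, 8);                  /* out of bounds: trapped */
        return 0;
    }

The inserted compare-and-branch is cheap on a fast commodity CPU (predictable branch, data already in cache), which is part of why this beats running the same workload on a slow research CPU with the check in hardware.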
See, that's the problem. Unless this is government-mandated, no sane vendor is going to pay for the performance penalty.
> Doing the same thing in software on a contemporary commercially established CPU is going to be much, much faster.
In what sense? Do you know if there's been proper research done in this area? Surely implementing the bounds checking / permissions would be faster in hardware.
I'm worried that if memory tagging becomes mandatory, it will suck the air out of the room for solutions that might have a more lasting impact. Keep in mind that memory tagging is just heuristics beyond very specific bug scenarios (linear buffer overflows are the prime example). The whole thing does not seem fundamentally resistant to future adaptations of exploitation techniques. (Although, oddly enough, I have been working on memory tagging lately.)
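To illustrate why linear overflows are the prime example, here's a tiny C sketch of the one case tagging catches deterministically. The tag values are invented for illustration; under MTE the tags are 4 bits kept in the upper pointer bits, and a hardened allocator arranges for adjacent allocations to get different ones:

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *a = malloc(16);  /* granule tagged, say, 0x3 */
        char *b = malloc(16);  /* allocator picks a different tag, say 0x7 */

        memset(a, 0, 32);      /* the 17th byte lands in a granule whose
                                  tag no longer matches a's pointer tag:
                                  a deterministic trap, no luck involved */
        free(a);
        free(b);
        return 0;
    }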
Regarding performant implementations of capability architectures, Fil-C running on modern CPUs is eventually going to overtake Arm's Morello reference board because it doesn't look like there's going to be a successor to the board. Morello was based on Arm's Neoverse-N1 core and produced using TSMC's N7 process. It was a research project, but it's really an outlier because such projects hardly ever have access to these kinds of resources (both CPU IP and tape-out on a previous-generation process). It seems all other implementations of CHERI are FPGA-based.
These approaches can only detect linear overflows deterministically. Use-after-frees (temporal safety violations) are only detected with some probability. It's mostly a debugging tool. And MTE requires special firmware, which is usually not available in the cloud because the tag memory reservation is a boot-time decision.
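And the probabilistic case, sketched in C: a use-after-free is only caught if the allocator happens to retag the reused memory with a different value than the stale pointer carries.

    #include <stdlib.h>

    int main(void) {
        char *p = malloc(32);  /* memory tagged T1, pointer carries T1 */
        free(p);               /* allocator retags the granules with a new T2 */
        char *q = malloc(32);  /* may hand back the same memory, tagged T2 */

        /* If T2 != T1 this traps; with 4-bit tags there is roughly a
           1-in-16 chance that T2 == T1, in which case the stale write
           goes through silently. Detection, not prevention. */
        p[0] = 'x';
        (void)q;
        return 0;
    }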
A language runtime vibe-coded for a language that people will very likely vibe code in. Something about that makes my head hurt a little.
I guess it's easier for LLMs to generate dynamic-language code than something like Rust (or even assembly). But still, one does wonder: why not just compile down to C or ASM? I guess the answer is ease of debugging and maintainability.