
Yeah, too bad. It's actually quite an innovative and cool design. Shoots pretty well for a striker (still a far cry from CZs and 2011s). An ecosystem had also started to develop around it (e.g. 1911-angle grips, high-quality holsters), and Sig optics and accessories have gotten quite good, too.


Lots of tradeoffs. If you invent a new codec, it's unlikely to make it into hardware for a while (even AV1 encoders are not yet widely supported), so you'll have to do encoding and possibly even decoding on the CPU, which takes resources away from the workload. H.264 is still probably the best general-purpose codec for real-time desktop streaming: low bandwidth requirements, 4:4:4 chroma support, build-to-lossless, low latency, moderate CPU usage if a GPU is not available, and long-standing GPU support (e.g. even back to Kepler).
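
To make that concrete, here's a minimal sketch of a low-latency H.264 desktop-streaming pipeline, driving ffmpeg from Python. Everything here is illustrative: the x11grab capture, resolution, bitrate, and destination address are assumptions, not anyone's actual setup.

    import subprocess

    # Hypothetical low-latency H.264 screen-streaming pipeline via ffmpeg.
    # ultrafast/zerolatency trade compression ratio for encode speed,
    # yuv444p keeps full chroma (crisper text), MPEG-TS over UDP keeps latency low.
    cmd = [
        "ffmpeg",
        "-f", "x11grab",             # capture an X11 display (assumed host setup)
        "-framerate", "30",
        "-video_size", "1920x1080",
        "-i", ":0.0",
        "-c:v", "libx264",           # swap for h264_nvenc if a GPU encoder exists
        "-preset", "ultrafast",
        "-tune", "zerolatency",
        "-pix_fmt", "yuv444p",       # 4:4:4, no chroma subsampling
        "-b:v", "4M",                # placeholder bitrate
        "-f", "mpegts",
        "udp://192.0.2.10:5000",     # placeholder destination
    ]
    subprocess.run(cmd, check=True)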


The compute overhead of an H.264 encoder is non-negligible on a VM host where I want all my CPU cycles to go to user VMs. Datacenter-class Intel CPUs (Xeons) don't include H.264 encoders in hardware; Quick Sync circuitry is generally limited to consumer-grade CPUs. Not to mention the MPEG licensing issues.

AV1 eliminates the MPEG licensing issues, but hardware encoding support is even more limited. AV1 is also geared toward encode-once use cases (e.g. YouTube), since it heavily trades encode speed for lower bandwidth. It's workable for real-time streaming at its fastest settings, but H.264 is still better overall.
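
A rough way to see the hardware-encoder availability problem is to probe what ffmpeg was built with. This is only a heuristic sketch: a listed encoder like h264_qsv or h264_nvenc will still fail at runtime if the silicon isn't actually there, so a real check needs a test encode.

    import subprocess

    def ffmpeg_encoders() -> set[str]:
        # List encoder names ffmpeg was compiled with
        # (presence here is not proof the hardware exists).
        out = subprocess.run(
            ["ffmpeg", "-hide_banner", "-encoders"],
            capture_output=True, text=True, check=True,
        ).stdout
        names = set()
        for line in out.splitlines():
            parts = line.split()
            # Encoder rows start with a flags column like "V....D";
            # skip the legend rows ("V..... = Video") and headers.
            if len(parts) >= 2 and parts[0][0] in "VAS" and parts[1] != "=":
                names.add(parts[1])
        return names

    encoders = ffmpeg_encoders()
    for hw in ("h264_qsv", "h264_nvenc", "h264_vaapi", "av1_qsv", "av1_nvenc"):
        print(hw, "listed" if hw in encoders else "missing")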


Note that this is a bit of a POV thing. For one, CPU cycles spent handling the display also go (indirectly) to your user. And if your users get a crisper picture with less bandwidth thanks to a modern codec, that can also be seen as a win in my book.

Modern CPUs increasingly include the building blocks for hardware video encoding, and getting one of those, or a dedicated GPU, probably makes sense if the users' VM workloads depend on graphical output.

That said, you're definitely right that it won't be a win for every use case on every piece of hardware, so it's something to look at more closely. And if it really is worse than the status quo on systems without a dedicated GPU where the CPU has no HW acceleration, which I doubt, then adding an opt-out would definitely make sense.


Wouldn't it also be a problem that, IIUC, a CPU has only one encoding engine, so in a multi-tenant scenario you could only have one active stream (at full speed)?


RDP is aimed at a different use case than VNC. Proxmox and other virtualization managers (e.g. VMware, Nutanix) use VNC because you get a stream directly from the hypervisor (e.g. KVM, ESX), which is very useful for low-level debugging. The VNC protocol also has very low overhead (you don't really want H.264 encoding CPU overhead on your VM host). VNC is not really intended for remote desktop use cases, which require higher fidelity, frame rate, etc.

So -

* VNC: Low overhead / Low fidelity

* RDP (and other remote desktop protocols, e.g. Frame Remoting Protocol, Horizon Blast, Citrix ICA/HDX): Higher overhead / High fidelity


What qualifies as "low" overhead?

RDP will run without issue over a 56k modem in a low-color mode to an RDP host.


Low CPU overhead. VNC streams screen grabs with minimal (if any) compression, which results in low CPU overhead, high bandwidth consumption, and a low frame rate. That's fine for the low-level VM debugging it's used for in the context of virtualization management systems, but not so great for desktop remoting.

While RDP may run okay on 56k in a low-color mode for some use cases (e.g. simple Windows admin), it requires significantly more bandwidth and compute overhead (either CPU or GPU) for more advanced use cases (e.g. video editing, CAD, etc.)
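
Back-of-the-envelope math shows why uncompressed screen grabs burn bandwidth instead of CPU. The figures below are illustrative (real VNC encodings like Tight or ZRLE do compress somewhat, and 8 Mbit/s is just an assumed typical desktop H.264 bitrate, not a measured one):

    # Rough bandwidth math for an uncompressed 1080p framebuffer stream.
    width, height = 1920, 1080
    bytes_per_pixel = 4        # 32-bit RGBA
    fps = 30

    raw_bps = width * height * bytes_per_pixel * fps * 8
    print(f"raw framebuffer: {raw_bps / 1e9:.1f} Gbit/s")       # ~2.0 Gbit/s

    h264_bps = 8e6             # assumed 8 Mbit/s H.264 desktop stream
    print(f"H.264 saves ~{raw_bps / h264_bps:.0f}x bandwidth")  # ~250x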


That might practically be where VNC finds usage today, but when it was introduced in the 90s, remote desktops were the intended use case.

"In the virtual network computing (VNC) system, server machines supply not only applications and data but also an entire desktop environment that can be accessed from any Internet-connected machine using a simple software NC." -- https://www.cl.cam.ac.uk/research/dtg/attarchive/pub/docs/at... (1998)


Given the CPU load I've witnessed on VNC servers, I don't think "low overhead" is right these days.

VNC was designed for remote desktop use. All the other streaming features came along later. I don't see why RDP would be a worse choice here, other than that Windows VM integration would make for a better solution.

RDP used to be far inferior because it was proprietary Microsoft tech, with buggy open-source clients and an undocumented server that kept changing things around. These days, open-source RDP server software is actually quite solid. I don't know if Gnome/KDE leverage the partial update mechanism that makes RDP so useful on Windows (it doesn't seem like it, judging by the performance I'm getting out of VMs), but I find RDP to be a lot more useful for interactive desktop streams than VNC.


> I don't know if Gnome/KDE leverage the partial update mechanism

I guess that would be something for the Wayland compositor to manage. Maybe a Wayland compositor that is also an RDP server? Or maybe they're all like that already?
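
For anyone wondering what the partial update mechanism looks like mechanically, here's a toy sketch of my own (not Gnome/KDE/Windows code): diff two frames tile by tile and ship only the dirty rectangles instead of the whole framebuffer.

    import numpy as np

    TILE = 64  # tile size in pixels, an arbitrary choice for the sketch

    def dirty_tiles(prev: np.ndarray, curr: np.ndarray):
        """Yield (x, y, tile) for each TILE x TILE region that changed."""
        h, w = curr.shape[:2]
        for y in range(0, h, TILE):
            for x in range(0, w, TILE):
                a = prev[y:y + TILE, x:x + TILE]
                b = curr[y:y + TILE, x:x + TILE]
                if not np.array_equal(a, b):
                    yield x, y, b  # encode/send only this rectangle

    # Usage sketch: a static 1080p desktop with one small change.
    prev = np.zeros((1080, 1920, 3), dtype=np.uint8)
    curr = prev.copy()
    curr[100:110, 200:210] = 255  # simulate a cursor blink
    updates = list(dirty_tiles(prev, curr))
    print(len(updates), "tile(s) to send instead of the whole frame")  # 1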


Also note that there is a critical difference in how they talk to the OS:

* VNC (and other non-RDP solutions like TeamViewer, etc.): a fully independent application that does not change how Windows works, because it's effectively just an interactive screen recorder running in your user account.

* RDP: an actual Windows remote user session that hijacks the computer (so a local user can't see what's happening) and hooks directly into Windows with its own device bindings and login properties (e.g., you can't just click Start -> Shut down; instead you need to command-line your way to victory).

If you want to remote into a machine that's playing audio without interfering with that, RDP is flat out not an option. Even if you pick "leave audio on the remote", the fact that RDP forces Windows to use a different audio device is enough to interfere with playback.


RDP doesn't need to tie into the OS like that. Plenty of ways to run X11 over RDP, for instance. And unlike in VNC, you can actually use the forward/back buttons on your mouse!

RDP on Windows happens to be implemented with some fancy tricks that make Windows a much better OS for remote work than any Linux distro, but that doesn't mean it's the only possible implementation. Whatever logic is used to detect block updates in VNC works just as well over RDP. Audio over RDP also works fine on both Windows and Linux, so I don't see what the problem would be anywhere else.

As for the shutdown thing, Linux seems to do that too. Makes sense if you use your computer as a terminal server, I guess. I don't reboot my computer over RDP enough to care, really. Still, that's just an implementation choice, nothing to do with the protocol itself.


Right, so, RDP by the people who invented it =)


Well, if you keep the rudder aligned with the engine (i.e. parallel), you are really using both, not just the engine.


Timing couldn't be better. VMware is actively firing and pissing off large swaths of its customer base, and Nutanix is basically the only serious alternative for on-prem.

What is the total overhead (in terms of cores, memory) of the management layer with Oxide (incl. block storage, vmm, etc.)?


Really grateful for the major contributions Google made to WebRTC over the years, driven by the Stadia effort. They relatively quickly turned it into a viable, production-worthy, real-time protocol, raised the state of the art in browser-based streaming, and reduced complexity in a big way. There were things you simply couldn't do in the browser before WebRTC (e.g. UDP streaming), and many other things were significantly more complex and browser-specific (e.g. tapping into hardware decoders). They were also very receptive to external contributions, which is really nice to see in a major corporate-driven open source project.
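
As a small taste of the API surface in question, here's a hedged sketch of the "UDP streaming" part using aiortc, a third-party Python implementation whose names mirror the browser's RTCPeerConnection. An unordered data channel with zero retransmits is effectively best-effort datagram delivery, which browsers never exposed before WebRTC. The channel label and message are placeholders, and a real app would exchange the SDP through a signaling server.

    import asyncio
    from aiortc import RTCPeerConnection  # pip install aiortc

    async def main():
        pc = RTCPeerConnection()
        # Unreliable, unordered channel: best-effort, datagram-like delivery.
        channel = pc.createDataChannel("game", ordered=False, maxRetransmits=0)

        @channel.on("open")
        def on_open():  # fires once a remote peer has answered
            channel.send("input: jump")  # placeholder game input

        # Normally the offer/answer SDP goes through a signaling server;
        # here we just generate and print the local offer.
        offer = await pc.createOffer()
        await pc.setLocalDescription(offer)
        print(pc.localDescription.sdp[:200], "...")
        await pc.close()

    asyncio.run(main())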


While Stadia did cause them to do more work on WebRTC (AFAIK mostly around latency), their WebRTC efforts--and you are referencing high-level stuff, not low-level Stadia-specific details--were mostly driven by Google Hangouts, not Stadia.


The food shortage has nearly nothing to do with climate change and nearly everything to do with sanctions against Russia, which sent prices of potash, nitrogen, and ultimately fertilizer skyrocketing. Combine that with the fact that Ukraine and Russia are also major wheat exporters. Aside from China, most countries run very lean food reserves, which adds further pressure.


Climate change has a definite impact on crop yields. The recent impact on the wheat crop in India is a case in point:

https://phys.org/news/2022-04-india-wheat-crop-snags-export....

> An unusually early, record-shattering heat wave in India has reduced wheat yields [...] Climate change has made India's heat wave hotter, said Friederike Otto, a climate scientist at the Imperial College of London [...] "But now it is a much more common event—we can expect such high temperatures about once in every four years," she said.

> India's vulnerability to extreme heat increased 15% from 1990 to 2019, according to a 2021 report by the medical journal The Lancet.


Don't blame the sanctions. The sanctions didn't bomb the wheat fields of Europe into shit and blockade all the seaports in Ukraine.


Correct. HN loves the climate-change boogeyman theology.


Hoping you are being ironic here rather than moronic


This is pretty nonsensical. Banks are incentivized to price IPOs at the highest possible price; their comp is directly linked to the proceeds.

If the IPO pop were something nefarious, how do you explain the IPO pop of Goldman Sachs stock? They ran their own IPO, and you can be sure as hell the partners didn't want to leave any money on the table.

In general, the IPO pop is an interesting phenomenon, and it's not fully explained in the literature.


It's easily explained by accepting that the IPO price is a guess in the first place, and that the negative publicity of guessing wrong and having a significant first-day decrease is more professionally embarrassing than having an increase.


Don't the bankers go through a price-discovery process where they find buyers pre-IPO? It's not like a blind guess.


Those buyers are institutional investors and they get a huge discount.


> They ran their own IPO and you can be sure as hell that partners didn't want to leave any money on the table.

The optimal play for the partners probably is giving institutional investors a pop even on your own IPO, so you keep them as investors for future IPOs where you are collecting fees. When institutional investors lose money on an IPO, are they going to come back to you for the next one?


All the money is made by handing the bank's best customers massive instant gains.


The answer, like most things in life, lies in between.

Yes, in an ideal world you "price it right" and on IPO day the stock doesn't move up or down from the opening price.

Optically, it looks a lot better to price low and have the stock rise.

Additionally, there is something called an over-allotment option (aka greenshoe) that allows banks to sell more shares than initially allotted, to help stabilize the IPO price. This is usually more $ for the banks.
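
A quick illustration of the greenshoe mechanics with made-up numbers: the underwriters sell 115% of the base deal at pricing, which leaves them short 15%. If the stock pops, they exercise the option and buy the extra shares from the issuer at the IPO price; if it drops, they cover the short in the open market, which props up the price.

    # Hypothetical greenshoe (over-allotment) arithmetic; all numbers made up.
    base_shares = 10_000_000
    ipo_price = 20.00
    greenshoe_pct = 0.15          # the standard 15% over-allotment
    fee_rate = 0.07               # a typical ~7% gross spread

    oversold = int(base_shares * greenshoe_pct)  # shares shorted at pricing

    # Case 1: stock pops -> exercise the option at the IPO price.
    extra_proceeds = oversold * ipo_price
    print(f"pop: issuer raises ${extra_proceeds:,.0f} more, "
          f"banks earn ${extra_proceeds * fee_rate:,.0f} in extra fees")

    # Case 2: stock drops to $18 -> cover the short in the open market,
    # stabilizing the price; the desk keeps the difference.
    market_price = 18.00
    print(f"drop: stabilization buying nets ${oversold * (ipo_price - market_price):,.0f}")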

