Oxide at Home: Propolis Says Hello (artemis.sh)
214 points by xena on March 14, 2022 | 107 comments


Love what they are doing. And as a cloud and on-prem supporter I get what they are trying to accomplish.

If you haven't heard of it, you should check out their podcast, "On the Metal" [0]. It's truly a gift, especially if you are an elder millennial.

[0] https://oxide.computer/podcasts


The podcast is awesome. Wish they would continue with more episodes!


They've been doing Twitter Spaces for several months now, with recordings and show notes here: https://github.com/oxidecomputer/twitter-spaces Disclosure: I was the main speaker on one of their spaces.


TIL.

On one hand I'm excited about the new content, but it's a bit weird the way they stealthily shut down the podcast and started the Twitter Spaces thing up without saying anything or updating their homepage. Even their Twitter feed only mentions the Spaces a couple of times (and doesn't appear to link to the archive).


Whaaaaaaattt. And I've been here waiting for a new podcast, like an idiot... when they had this... Thanks for the info!


They are quite different from the podcasts. Wider range of topics and not as historically focused. I hope they do both in the future.

Technical recommendations:

- Twitter Space 12/13/2021 -- The Pragmatism of Hubris

- Twitter Space 12/6/2021 -- Tales from the Bringup Lab

- Twitter Space 11/29/2021 -- The Sidecar Switch

Historical:

- Docker, Inc., an Early Epitaph

- Twitter Space 7/5/2021 -- NeXT, Objective-C, and contrasting histories

- Twitter Space 5/31/2021 -- Silicon Cowboys

- Twitter Space 8/16/2021 -- The Showstopper Show

All of them are worth watching.


Wow, weird, how could one ever have found this? Thanks for the link!!

Oxide if you're listening, please publish these on the podcast feed to make them more accessible and followable.


Thanks!

I quickly read a note about Oxide and Twitter Spaces a while ago, but I read the keyword Twitter with a mindset of "good for short notes and links to external sources," since I wasn't at all aware of its voice capabilities. That might be a shortfall of using Twitter in the browser.

Thus I now have lots of interesting material to enjoy.


I'm currently reading up on this, but I'm struggling to match a use case.

It's not Openstack. It's not VMware. It's not kubernetes. It's not proxmox. It's not Xen. It's not Anthos. It's not GCDE. It's not Outposts.

So who and what is it for? What is the use case where none of these other products fit the bill?

Especially for an on premise use case.


This article is about technical details of the product that aren't user-facing.

The business is fairly straightforward: we sell computers, a rack at a time. You as a customer can buy a rack, and put it in your data center. The rack offers an in-browser management console, built on top of an API you can use too. You use these tools to set up virtual machines. You can then use those VMs however you want. You get the cloud deployment model but with the "I buy the servers" ownership model.
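To make that concrete, here is a minimal sketch of what "a console built on an API you can use too" implies. The endpoint, field names, and auth scheme below are hypothetical placeholders, not Oxide's actual API:

    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Talk to a hypothetical rack-local control plane over HTTP.
        let client = reqwest::blocking::Client::new();
        let resp = client
            .post("https://rack.internal.example/api/instances") // hypothetical endpoint
            .bearer_auth("API_TOKEN")                            // hypothetical auth
            .json(&json!({
                "name": "build-runner-01",
                "vcpus": 8,
                "memory_gib": 32,
                "image": "debian-11"
            }))
            .send()?
            .error_for_status()?;
        println!("created: {}", resp.text()?);
        Ok(())
    }

The point is just that anything the in-browser console can do, a script against the same API can do too.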

There are a few different advantages, depending on how you want to look at it.

Starting from a rack as the smallest unit rather than 1U brings a lot of advantages, but there aren't really vendors currently selling these sorts of things; instead, "the hyperscalers" have internal teams building stuff like this. There are a lot of organizations who want hyperscale-style servers but aren't going to start a division to begin making them themselves.

Another advantage is that everything is designed to work with the rest of it: you (or the OEM you're buying from) are not cobbling together a bunch of hardware, firmware, and software solutions from disparate vendors and hoping the whole thing works. Think "Apple," or "Sun," rather than "IBM PC Compatible." This is easier for users, and it also allows us to build systems we believe are more reliable.

There's also smaller things, like "as much as possible everything is open source/free software," which matters to some folks (and allows for interesting things like the above blog post to happen!) and is less important to others.


> There are a lot of organizations who want hyperscale style servers but aren't going to start a division to begin making them themselves.

How does this differ from what large players like Dell are offering under the "hyperconverged" moniker? For example, Dell's VxRail [0] appears (from marketing speak, anyway) to be a single rack with integrated networking and storage that you can ask to "just start a VM".

[0]: https://www.dell.com/en-us/dt/converged-infrastructure/vxrai...


So, "hyperscale" and "hyperconverged" are two different things. Names are hard.

"hyperconverged" is a term used by VMware to describe a virtualized all-in-one platform. You get compute, storage, and networking, all virtualized as one appliance rather than as individual ones. VxRail is basically Dell EMC's implementation of this idea: you get one of their servers, vSAN and vSphere all set up and ready to go.

"hyperscale infrastructure" describes an approach to designing servers to begin with. A lot of folks moved toward commodity hardware in the datacenter a decade or two ago. And then you get more and more of them. The hyperscale approach is kind of top-down as opposed to that bottom-up style: how would we design a data center, not just a server. Don't build one server and then stick thousands of them in a building; think about how to build a building full of servers. This is more of an adjective, like RESTful, rather than a standard, like HTTP 1.1. That being said, the Open Compute Project does exist, but I still think it's closer to a way of thinking about things than a spec.

Okay, so all of that is still a bit fuzzy. But it's enough background to start to compare and contrast, so hopefully it makes a bit more sense.

The first difference is the physical construction of the hardware itself. If you buy VxRail, you're still buying 1U or 2U at a time. With Oxide, you're buying an entire rack. The rack isn't built in such a way that you can just pull out a sled and shove it into another rack; the whole thing is built in a cohesive way. This means that not every organization will want to own Oxide; if you don't have a full rack of servers yet, you don't need something like we offer. But if you're big enough, there's advantages to designing for that scale from the start. This is also what I meant by there not being a place to buy these things; other vendors will sell you a rack, but it's made up of 1U or 2U servers, not designed as a cohesive whole, but as a collection of individual parts. The organizations that are doing it this way are building for themselves, and don't sell their hardware to other organizations. This is also one way in which, in a sense, Oxide and VxRail are similar: you're buying a full implementation of an idea from a vendor. Just the ideas are at different scales.

The other side would be software, which of course is tied into the hardware. With VxRail, you're getting the full suite of software from VMware. You may love that, you may hate it, but it's what you're getting. With Oxide, you're getting our own software stack, which the article is about the details of. You may love that, you may hate it, but it's what you're getting :). That being said, I haven't actually used a full enterprise implementation of the VMware stack, so I don't know to what degree you can mess with things, but our management software is built on top of an API that we offer to customers too, so you can build your own whatever on top of that if you'd like. Another thing here is that, well... the VMware stack is not open source. All our software will be. That may or may not matter to you.

The last bit about software though, is I think a bit more interesting: even though you are buying a full solution from Dell EMC, you're also sort of not. That is, Dell and VMware are two different organizations. Yes, part of what you're getting is that they say they have pre-tested everything in the factory to make sure it all works together well, but at the end of the day, it's still integrating two different organizations' (and probably more) software together. With Oxide, because we're building the whole thing, we can not only make sure things work well together, but really take responsibility for that. We can build deep integrations across the entire stack, and make sure that it not only works well, but is debug-able. Dell EMC isn't building the hypervisor and VMware isn't writing the firmware. Oxide is writing all of it. We think this really matters for both reliability and efficiency reasons.

So... yeah. That's a summary, even though it's already pretty long. Does that all help contextualize the two?


Thanks! That gives me a bit more context.


You're welcome! Sorry you're being downvoted, no idea what's up with that, it's a reasonable question. Sometimes our stuff can seem opaque, but that's because we're mostly focused on shipping right now, rather than marketing. Always happy to talk about stuff, though.


:shrug: That's fine, I would always rather have a conversation. Thanks for your time!


I am very excited about what Oxide is doing, including how the work is being open sourced, and upstreamed.

I also love that they continued to bet on Illumos and am looking forward to the continued growth and development in the Illumos space.


There is still a place for fully integrated and engineered systems. If you have the need for high levels of concurrency, performance, availability, and need to know… not guess… what your opex is going to look like. I’m a user and supporter of cloud providers, but there are some fat, fat margins being booked there. Not every company can have a storage team, network team, and compute team to integrate those things properly.

0xide is one of the few really interesting new tech companies out there.

I have to admit some bias, as I was involved with a company offering a “poor man’s vBlock” around 2010. We didn’t grow fast or large, but we never once lost a deal against commodity hardware vendors. They were easy to beat.


Oxide is hiring in the following areas: electrical engineering, security, embedded, control plane + API, internal systems automation, dev tools, and product design. (I work there.)

https://oxide.computer/careers


Curious... if I'm still in the "triage" bucket after 6 weeks should I assume the ship has sailed? I was really hoping to hear back one way or the other!


No, you will hear back. We're trying really hard to keep to the 6 weeks thing but sometimes we don't succeed.


I would think that startups would want to bias toward people who can make good decisions quickly, within a short decision window. Perhaps hardware startups want more conservative employees?

Also in my experience, a fabulous candidate is sometimes only available for a very short time window (they either have become available due to unforeseen circumstances, or they are snapped up by a faster mover).

Is six weeks fast in your opinion?


In general: no, we'd love for things to be faster than six weeks. We just get a tremendous amount of applicants, and we have a lot of work to do. So the queue gets backed up. Hiring is hard.


Not insinuating that I'm a "fabulous" candidate (also totally possible that I am!), but I've been experiencing the opposite. The quickest offers I've received have had the most red flags.

The places I'd like to work seem to move slower and require more preparation. For instance Oxide has applicants put together a lot of written work and another company has a seemingly easy take-home project but which requires unfamiliar (to me) setup that I haven't had time to tackle. Then there are the faang-style interviews with loops scheduled perhaps weeks in the future.

I may send another round of applications soon, but I'm going to be more selective so I can manage the process better.


Please get back to them today; consider this a prompt or cue to rescue this one from falling through the cracks. You never know...


Thank you, that does help. I thankfully have some flexibility in my search… and I promise the other replies aren’t alt accounts!


They got back to me after about 5.5 weeks (it was a no, but nicely worded). I think they are just really busy.


All your positions seem to be US-based, though. Or am I missing something?


All the positions on the site currently are remote-friendly. I work for Oxide remotely from the UK; you just need a reasonable overlap with PT. I overlap four hours most days.


I'm curious how that works with the one-salary policy.


Everyone is paid the same, regardless of location.


I won't claim to know how salaries work in the US, but from what I know in some other countries, there is an employer overhead to salary, so for a salary of x, the employer is actually lining up x times k, with k larger than 1. However, if you're not in the US, you're either an independent contractor or an employee in a GEO or something of the sort.

In the first case, you would bill something and get a salary out of it. If you bill x, a) your income is in dollars rather than local currency, which from experience is not great, b) you cost less to the company than US employees who cost x times k, c) you have to pay your own overhead before making it a salary. So if you bill x, your salary is less than x, and you still have to pay taxes on that salary.

If GEO or other similar arrangement, you're usually paid in local currency, presumably an amount corresponding to x at a given date, and that has extra overhead for the employer different from what a normal employee would cost.

Either way, "everyone is paid the same, regardless of location" doesn't clarify much. Thus my original question.
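To make the asymmetry concrete, a worked example with entirely made-up numbers (x, k, and the contractor overhead below are hypothetical, just to illustrate the parent's point):

    fn main() {
        let x: f64 = 150_000.0; // hypothetical "everyone is paid x" figure
        let k: f64 = 1.3;       // hypothetical employer overhead multiplier

        // US employee: pre-tax salary is x, but the employer's total cost is x * k.
        println!("US employee total cost to employer: {:.0}", x * k);

        // Contractor billing x: their own overhead comes out of x before it becomes salary.
        let contractor_overhead = 0.20; // hypothetical
        println!("Contractor's pre-tax income: {:.0}", x * (1.0 - contractor_overhead));
    }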


Sure, in that case: as I understand it, everyone's pre-tax salary is the same, so your take-home pay obviously depends on where you're a resident.

Currently we are using a remote hiring platform with local legal entities, so you'll get paid in local currency and receive local benefits. I'm based in the UK, so I don't have the overhead of medical insurance, but the platform itself has costs and I'm not sure how the costs of a UK employee compare to other countries.

But, we don't have many international employees, so the setup might change in the future or depending on the individual. When I first started, I briefly worked part-time as an independent contractor.


I get that Oxide has a lot of ex-Joyent folks, but I can't help but wonder how much the choice of a Solaris-derived OS will hobble adoption, support for devices, etc. In many ways this feels like SmartOS all over again - a killer concept that will be eclipsed by a technically inferior but more tractable (for contributors, for management) solution.


I thought this too, but if the goal of the control plane software and blade host OS is solely to create VMs and not to actually be a general-purpose OS, this probably doesn't matter as much?


Idk, KVM/libvirt/qemu has worked fine for me. It is very lightweight compared to, say, VMware. If I don't want VMs I could use Docker/containerd.

What problem does Oxide solve exactly?


Sell you a few racks of servers, ready to use out of the box, I guess?


Wouldn’t an enterprise go with what they’ve always used, like Dell, EMC, HP, etc.? And most startups will use a cloud provider instead of on-prem.


If the core idea of Oxide is right, that putting together a bunch of PC compatibles yourself with a whole lot of complex and expensive software only leaves you with a huge maintenance and security nightmare, then why wouldn't enterprises want to switch from Dell and co.?

The point of a startup is to disrupt the current market. And they are trying. Let's see how they do.


> but I can't help but wonder how much the choice of a Solaris-derived OS will hobble adoption, support for devices, etc.

Does it matter? Oxide is building the whole stack - their own hardware with their own firmware to run their own OS with their own virtualization layer. They don't need support for arbitrary devices, because they control the hardware.


They still have to write a bunch of drivers that they'd get for free with Linux. Clearly they think the tradeoff is worth it but it's not obvious why.


Possible better security/performance through better architecture?


From what I've seen so far the same architecture could be achieved on Linux (e.g. Firecracker or Intel Cloud Hypervisor). To get great performance you often need to get elbow-deep in somebody else's driver and that may be just as much work as writing your own drivers.


We don't need Linux monoculture.


Not everything has to run linux. There's enough of it in IT already.


I really worry about a startup taking a massive bet on their own custom hardware now in 2022. The world was much, much different in December 2019 when Oxide started than it is now. Let's hope the investment cash keeps flowing and the hardware gets to folks that purchased it.


Right now is, in fact, the best time to be betting on custom hardware.

Moore's Law has been dead for a while. Getting "performance" now requires design and architecture again rather than just sitting back for 18 months and letting Moore's Law kill your competitor.

The big problem right now is that custom chip hardware is still too stupidly expensive because of EDA software. Fab runs are sub-$50K, but EDA software is greater than $100K per seat and goes up rapidly from there.


Do you really need proprietary EDA tools to get started on designing custom chips? Higher-level design languages like Chisel are showing a lot of potential right now, with full CPU cores being designed entirely in such languages. Of course EDA will be needed once the high-level design has to be ported to any specific hardware-fabbing process, but that step should still be relatively simple since most potential defects in the high-level design will have been shaken out by then.


> Do you really need proprietary EDA tools to get started on designing custom chips?

Yes, actually, you do.

The "interesting" bits in chip design aren't the digital parts--the interesting bits are all analog.

A RISC core is an undergraduate exercise in digital design and synthesis in any HDL--even just straight Verilog or VHDL. It's a boring exercise for anyone with a bit of industry experience as we have infinite and cheap digital transistors. (This is part of the reason I regard RISC-V as a bit interesting but not that exciting. It's fine, but the "RISC" part isn't where we needed innovation and standardization--we needed that in the peripherals.)

However, the interfaces are where things break down. Most communication is now wireless (WiFi, BLE, NB-IoT) and that's all RF (radio frequency) analog. Interfacing generally requires analog to digital systems (ADCs and DACs) and those are, obviously, analog. Even high-speed serial stuff requires signal integrity and termination systems--all of that requires parasitic extraction for modeling--yet more analog. And MEMS are even worse as they require mechanical modeling inside your analog simulation.

If your system needs to run on a coin cell battery, that's genuinely low power and you are optimizing even the digital bits in the analog domain in order to cut your energy consumption. This means that nominally "digital" blocks like clocks and clock trees now become tradeoffs in the analog space. How does your debugging unit work when the chip is in sleep?--most vendors just punt and turn the chip completely on when debugging but that screws up your ability to take power measurements. And many of your purely digital blocks now have "power on/power off" behavior that you need to model when your chip switches from active to sleep to hibernate.

All this is why I roll my eyes every time some group implements "design initiatives" for "digital" VLSI design--"digital" VLSI is "mostly solved" and has been for years (what people behind these initiatives are really complaining about is that good VLSI designers are expensive--not that digital VLSI design is difficult). The key point is analog design (even and especially for high performance digital) with simulation modeling along with parasitic extraction being the blockers. Until one of these "design initiatives" attacks the analog parasitic extraction and modeling, they're just hot air. (Of course, you can turn that statement around and say that someone attacking analog parasitic extraction means they are VERY serious and VERY interesting.)


> It's a boring exercise for anyone with a bit of industry experience as we have infinite and cheap digital transistors.

Having "infinite and cheap" transistors is what makes hardware design not boring. It means designs in the digital domain are now just as complex as the largest software systems we work with, while still being mission-critical for obvious reasons (if the floating point division unit you etched into your latest batch of chips is buggy and getting totally wrong results, you can't exactly ship a software bugfix to billions of chips in the field). This is exactly where we would expect shifting to higher-level languages to be quite worthwhile. Simple RISC cores are neither here nor there; practical multicore, superscalar, vector, DSP, AI etc. etc. is going to be a lot more complex than that.

Complicated analog stuff can hopefully be abstracted out as self-contained modules shipped as 'IP blocks', including the ADC and DAC components.


Why? If anything, commodity/non-custom hardware is what's hurting right now. Fat margins on hardware imply a kind of inherent flexibility that can be used to weather even extreme shocks.


There are plenty of commodity chips that go into making a full server rack. If any little power regulator, etc. is backordered for months and years it's just more unexpected pain. And that's before we even get to the problems of entire factories shutting down, just look at what's happening to Apple & Foxconn of all companies in Shenzhen this week. If the big players are struggling the small fries are in for pain too.


The supply chain crisis is very, very real, but we are blessed with absolutely terrific operations folks coming from a wide range of industrial backgrounds (e.g., Apple, Lenovo, GE, P&G). They have pulled absolute supply chain miracles (knocking loudly on wood!) -- but we have also had the luxury of relatively small quantities (we're not buying millions of anything) and new design, where we can factor in lead times.

tl;dr: Smaller players are able to do things that larger players can't -- which isn't to minimize how challenging it currently is!


Just curious, are you all working out of the same place or all remote? Curious about hardware startups and how that works. Thanks


We have an office, but many people aren't in the Bay Area (myself included). Not everyone is doing hardware, and some folks who do have nice home setups they enjoy working with. It's a spectrum, basically.


Thanks steve


As someone who got massively excited by SmartOS, only to see adoption never reach even the minimal levels I hoped for - yes, I hear you.

What would the hosting story look like now if 8(?) years ago 25% of servers had adopted SmartOS?


I got massively excited by SmartOS, too. Coming from a vSphere and Hyper-V world I found it to be a joy to use. The way that Illumos zones leverage ZFS is really, really cool. The tooling in SmartOS was very nice to use, too.

I never used it in production anywhere, admittedly. I also never got a chance to try out Triton. I'm on the fence about whether or not I keep my SmartOS hosts in my home network now that Illumos is a second-class citizen when it comes to OpenZFS.


We run Triton and individual SmartOS boxes in production. They have pretty much replaced all of our VMware and Linux-based hypervisors save some very specific use cases, mostly relating to GPU passthrough and NVIDIA.

The case with OpenZFS does worry me as well. I fear the developers will slowly start introducing Linuxisms, thus sacrificing portability and stability for the great penguin.


We also run Triton on our public cloud @mnx.io

I would love to hear more about the experience of replacing VMware, and any other Triton details, good or bad.


Anything specific you'd like to know?

Generally there hasn't been anything game-breaking. Our biggest issues have been running out of logging space for the core services (we went with too-low-capacity disks in the beginning, doh) and one failed upgrade, which didn't even take down the whole cluster while we were fixing it. Our clients can now also provision their own VMs with the Triton API, unlike with VMware, which required admins to do it. Bhyve, like KVM before it, has also been rock solid in everything we run, from simple web servers to Kubernetes clusters.

For issues we've found the Joyent/SmartOS IRC channels to be excellent and they have helped us tremendously in debugging and fixing things. It's the best support I've encountered for a FOSS product by far and one of the biggest reasons I'm such an illumos advocate now.


This is great information, and we've had similar experiences.

I'm also looking forward to further testing LinuxCN (https://github.com/joyent/linux-live/tree/linuxcn) on Triton in the near future!

Are you running Manta (https://github.com/joyent/manta) for anything? If so, is that meeting your needs for object storage?


Not running Manta at the moment, but it's on the menu. The demand for object storage hasn't been high enough to justify the time spent setting it up, for now.

Not sure about linuxCN either. I would rather run !linux on bare metal as much as possible, and having purged most of the penguins from infrastructure it would feel a bit weird to go immediately back ;)

In addition, we also have SPARC hardware running OpenBSD due to its great hardware support, including ldoms! Of course it would be nice to run illumos on them too, but alas...


If the Oxide stack is good someone could make a name for themselves by porting it to Linux to get wider hardware support.


It might make even more sense to run a cutting-edge distributed OS on the actual Oxide hardware. With rack-scale platforms like this it could be feasible to do SSI with distributed memory across multiple "nodes". Current "cloud" platforms like Kubernetes are already planning on including support for automated checkpointing and migration, which is sort of the first step prior to going fully SSI.


I miss IRIX, too. :)


Depends how deeply integrated it is; if nothing else, I suspect that bhyve and kvm have sufficiently different APIs that it would be at least quite annoying to paper over the differences.
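For a concrete sense of that gap, here is roughly what the KVM side looks like, as a minimal sketch using the rust-vmm kvm-ioctls crate; the bhyve notes in the comments are my understanding of how it differs, not Propolis code:

    use kvm_ioctls::Kvm;

    fn main() {
        // Linux/KVM: one /dev/kvm device, driven by a well-known ioctl sequence.
        let kvm = Kvm::new().expect("open /dev/kvm");
        let vm = kvm.create_vm().expect("KVM_CREATE_VM");
        let _vcpu = vm.create_vcpu(0).expect("KVM_CREATE_VCPU");
        // Guest RAM would then be registered via set_user_memory_region, and so on.
        //
        // bhyve has no single /dev/kvm: each VM gets its own /dev/vmm/<name> device
        // with a different ioctl set (which Propolis wraps directly), so a port to
        // Linux/KVM would mean re-plumbing this whole layer, not just recompiling.
    }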


Only tangentially related: what I find strange about "Oxide" (styled as "0xide" - the first character is a zero) is that they got very close to actually having a valid hex number in C/C++/Rust notation (0x1de) as a logo, but stopped short...


It still amazes me that Oxide actually managed to grab the perfect PCI vendor ID

https://pcisig.com/membership/member-companies?combine=01de


That's fantastic. And here I thought that Intel's vendor ID of 0x8086 was cute.


yes it's crazy!


Unrelated: But anyone remember xoxide.com? The computer building site from the early 2000s? One of my favorite sites of all time.


Yes. The company still exists as Turn5; they pivoted to cars.


Is it weird that I'm insanely excited about Oxide as a product, even though I have absolutely no need or use case for it?


I'm a huge fanboy of Oxide, hope they succeed, the world needs more of this.

I'm very sad about what Silicon Valley has become. We speak of "tech companies" but they mostly no longer exist. What are the big names in Silicon Valley now? Advertising companies, for the most part. A movie company. Online shopping. Social whatevers. None of these companies sell tech products, they are not tech companies. Sure, they use tech internally but so do law offices and supermarkets, those aren't tech companies either.

I miss the Silicon Valley of actual tech companies. Sun, SGI, HP (the actual tech HP of back then), etc. Apple survives, but is focused on consumer-level stuff which I don't find interesting. Oracle is around, but they were always more of a lawyer shop than a tech company. Real hardcore tech companies, do any exist anymore?

Oxide is such fresh air, exciting!

Every week or so I'm about to send an application, I really want to work there. My partner would kill me though, so I haven't. (They have a flat pay scale that, when living in Silicon Valley, would make it very difficult to support a family.. so I'm stuck cheering from the sidelines.)


I'm enthusiastic about their products and the company in general, too. I don't often feel like I'd like to be an employee, but Oxide sounds like it would be a very exciting gig (but I lack any skill set to remotely justify even contacting them-- I don't think they're looking for heavily opinionated Windows / Linux sysadmins >smile<).

Their gear is targeted at a way larger scale than I'll ever get to use (what with the size of environments I work in). What I hear about their attitudes re: firmware, for example, makes me wish that I could have their gear instead of the iDRACs, PERCs, and other closed-source roach-motel hardware I'm stuck with.

I'm young enough that I just missed the era of computers that Oxide evokes. I put in a couple DEC Alpha-based machines in the late 90s and got a glimpse of what it might be like to have a vendor who provides a completely integrated hardware/software stack and "ecosystem". I'm sure there was operational advantage to being a "DEC shop" or a "Sun shop". The PC market crushed that old school model by wringing out the margin necessary to make that kind of company work. I'd love to see Oxide make a go of it, though.


They're a tech company that makes actual technology. You're allowed to be excited. Better technology upstream has a habit of floating down the stack in some form or another, even when lawyers make it hard (ZFS was released under a GPL-incompatible license (some people will argue intentionally), yet has influenced over twenty years of filesystem design in the Linux ecosystem for things like btrfs, coincidentally also owned from an IP standpoint largely by Oracle, for example).

Who knows? In twenty years, we could see something cool come out of this, like better U-Boot tooling. Or maybe they'll be purchased by Oracle, which would if nothing else be funny this time.


That is the really cool thing about them-- they're actually making new computers and the software and firmware to go with them.

Everything else "new" seems to be a rehash of the IBM PC (my "server" has an ISA-- ahem-- "LPC" bus... >sigh<). It's so refreshing to see something actually new.

The same goes with software and firmware. Any "new" systems software the last 10 years seems to be thin management veneers over real technologies like the Linux kernel, KVM and containers, GNU userland, etc. And it all ends up running on the same cruddy BMC's, "lights-out" controllers, embedded RAID controllers, etc.

I get a little bit of excitement at ARM-based server platforms (and RISC-V, for that matter) but everything there seems to be at even less of an "enterprise" level (from a reliability, serviceability, and management perspective) than the PC-based servers I already loathe.


KVM and containerization are not just "thin management veneers", they enable all sorts of new features.


I'm sorry I wasn't clear. KVM and containers are the technology. The "new" stuff I'm talking about are thin management veneers over these features.


Strictly speaking, kernel-level namespaces are the technology. "Containers" are a pattern based on kernel-level namespaces, and "thin management veneers" help make sense of the underlying technology and implement that pattern.
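A minimal sketch of that kernel primitive (Linux-only, using the nix crate; it needs the right privileges to actually succeed, and it's just an illustration of the point, not any particular runtime's code):

    use nix::sched::{unshare, CloneFlags};

    fn main() -> nix::Result<()> {
        // Move this process into fresh UTS and mount namespaces: that is the
        // kernel-level technology. Everything "container" is policy layered on top.
        unshare(CloneFlags::CLONE_NEWUTS | CloneFlags::CLONE_NEWNS)?;
        // A runtime would now set a hostname, pivot_root into an image,
        // join cgroups, drop capabilities, and exec the workload.
        Ok(())
    }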


> Or maybe they'll be purchased by Oracle, which would if nothing else be funny this time.

Oracle will likely be very interested in Oxide, but I suspect Bryan Cantrill would do everything in his power to prevent that happening. He's seen the lawn mower in action before and knows not to anthropomorphize it :)


Not at all; they're building a cool tech stack, but the only thing they sell is super expensive hardware that no individual - and not even that many businesses! - is likely to be able to afford.


So, the only thing really inherent about our price point is that we're selling compute by the rack: as it turns out, a whole rack of server-class CPUs (and its accompanying DRAM, flash, NICs, and switching ASICs) is pretty expensive! But this doesn't mean that it's a luxury good: especially because customers won't need to buy software separately (as one does now for hypervisor, control plane, storage software, etc.), the Oxide rack will be very much cost competitive with extant enterprise solutions.

Cost competitive as it may be, it doesn't mean that it hits the price point for a home lab, sadly. One of the (many) advantages of an open source stack is allowing people to do these kinds of experiments on their own; looking forward to getting our schematics out there too!


It also turns out that not many people have 3-phase power and can support a heat/power load of 15kW in their homes ;)


I actually suspect it would be a lot easier to support 15kW of power in my home than 15 kW of cooling.

I know several people with 2x 240V 32A 3-phase in their garage, that's 20+ kW at any reasonable power factor. But a 15 kW cooler that would work in summer would annoy the hell out of any neighbours living closer than a mile.


Simple solution: Turn those neighbours into shareholders and they can sleep to the sound of money all summer long :)


Where does this leave companies that would like to take advantage of fully integrated software and hardware (yes, intentionally referring to your old project at Sun), but don't need a full rack's worth of computing power (and maybe never will), and don't have the in-house skills to roll their own? Or do you think that what you're selling really only has significant benefits at a large scale?


I think the intention is that those people are better served with consolidated cloud providers? -- or even single digits of physical colocated servers.

It would be nice to have a known price point from a cloud provider which, once exceeded, prompts the question: "Should we buy a rack and colo it?" Even if the answer is "no," it's still good to have that option.

---

The thing is: datacenter technology has moved on from 2011 (when I was getting into datacenters), but only for the big companies (Google, Facebook, Netflix). I think Oxide is bringing the benefits of a "hyperscale" deployment to "normal" (i.e., single/double-digit rack) customers.

Some of those things, such as much more efficient DC converters, mean that not every machine needs to do its own AC/DC conversion.


What's kind of messed up, at least for tiny companies like mine, is that renting an ugly PC-based dedicated server from a company like OVH is currently cheaper than paying for the equivalent computing power (edit: and outgoing data transfer) from a hyperscale cloud provider like AWS, even though the hyperscalers are probably using both space and power more efficiently than the likes of OVH. My cofounder will definitely not get on board with paying more to get the same (or less) computing power, just for the knowledge that we're (probably) using less energy. I don't know what the answer is; maybe we need some kind of regulation to make sure that the externalities of running a mostly idle box are properly factored into what we pay?


> renting an ugly PC-based dedicated server from a company like OVH is currently cheaper than renting the equivalent computing power from a hyperscale cloud provider like AWS

That's not surprising, you're basically paying for scalability. An idle box doesn't even necessarily "waste" all that much energy if it's truly idle, since "deep" power-saving states are used pretty much everywhere these days.


Sure, the CPU may enter a power-saving state, but presumably for each box, there's a minimum level of power consumption for things like the motherboard, BMC, RAM, and case fan(s). The reason why AWS bare-metal instances are absurdly expensive compared to OVH dedicated servers is that AWS packs more computing power into each box. So for each core and gigabyte of RAM, I would guess AWS is using less power (edit: especially when idle), because they don't have the overhead of lots of small boxes. Yet I can have one of those small boxes to myself for less than I'd have to pay for the equivalent computing power and bandwidth from AWS.


Interestingly, I believe that unused DIMM modules could be powered down if the hardware bothered to support that. Linux has to support memory hotplug anyway because it's long been in use on mainframe platforms, so the basic OS-level support is there already. Since it's not being addressed in any way by hardware makers, my guess is that RAM power use in idle states is low enough that it basically doesn't matter.


RAM uses the same amount of power under high load as low load due to the way it is constantly refreshing the contents.

Each stick of DDR4 is going to consume on the order of 1.2w (idle CPUs can theoretically go lower than this).

I’d rather shut a whole machine down than go to the effort of offlining individual DIMMs, since the consumption is so low and quite static.
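Back-of-envelope, taking that ~1.2 W per DIMM figure at face value (the box size below is made up, just to show the scale involved):

    fn main() {
        let dimms = 16.0_f64;       // hypothetical fully populated 2-socket box
        let watts_per_dimm = 1.2;   // idle figure from the comment above
        let ram_watts = dimms * watts_per_dimm;                 // ~19 W
        let kwh_per_year = ram_watts * 24.0 * 365.0 / 1000.0;   // ~168 kWh/year
        println!("~{ram_watts:.0} W of RAM, ~{kwh_per_year:.0} kWh/year");
    }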


You’re amortising a lot of software developers and sysadmins with your AWS bill. It’s also in trend, so there's a bit of a premium.

They’re not reasonably equivalent. But I don’t doubt that Amazon is still laughing all the way to the bank.


Nope, it's proof that their marketing team is doing a great job.


Especially because there isn't one!


Well, on paper:) You, personally, are an amazing marketing department no matter what your official title is; "The Soul of a New Machine", for instance, is brilliant at getting mindshare. To be fair, I'm fairly sure you don't think of what you're doing as marketing, but the only difference I see is that this is much more natural/sincere than 99.99% of similar efforts - you're actually just that passionate and good at sharing your passion.


Ha, entirely fair! When we raised our initial round, we said that our podcasting microphones were the marketing department, which proved prophetic.


This is low key the Developer Relations playbook! I heard about Oxide thru the awesome On the Metal podcast y'all started :]


I am excited about Hubris OS and a few of the other bits.

But my secret hope is to eventually get an Oxide-branded developer laptop. The design language alone would make it worth it. But I'm not holding my breath, as that's clearly not their priority.


Same here!


Glad to see oxide on HN. Sick stack.


I've not looked at Illumos distros seriously in a long time, but the reason given for rejecting OmniOS seems really strange to me. My general impression was that OmniOS was a tight, server oriented distro that had a sane release schedule and up to date security patches. Who cares if they use the word "enterprise" in their marketing copy?


I use vm-bhyve on FreeBSD CURRENT. The little bit of Propolis described in the post reminds me of it a bit, but in Rust (with a service/API interface) instead of just CLI and shell. Sounds neat!

I wonder how hard it would be to port to FreeBSD.


There are some changes they have made to bhyve that would need to get ported to FreeBSD first.


Kudos. I've been looking at that stuff but have lacked the bandwidth to even think about getting it to run (wish I could, really).


I wonder if that stack would run on an HPE BladeSystem. Definitely something to try out after hours...



