I'm a huge fanboy of Oxide, hope they succeed, the world needs more of this.
I'm very sad about what Silicon Valley has become. We speak of "tech companies" but they mostly no longer exist. What are the big names in Silicon Valley now? Advertising companies, for the most part. A movie company. Online shopping. Social whatevers. None of these companies sell tech products, they are not tech companies. Sure, they use tech internally but so do law offices and supermarkets, those aren't tech companies either.
I miss the Silicon Valley of actual tech companies. Sun, SGI, HP (the actual tech HP of back then), etc. Apple survives, but it's focused on consumer-level stuff I don't find interesting. Oracle is around, but they were always more of a lawyer shop than a tech company. Real hardcore tech companies -- do any exist anymore?
Oxide is such a breath of fresh air. Exciting!
Every week or so I come close to sending in an application; I really want to work there. My partner would kill me though, so I haven't. (They have a flat pay scale that, when living in Silicon Valley, would make it very difficult to support a family, so I'm stuck cheering from the sidelines.)
I'm enthusiastic about their products and the company in general, too. I don't often feel like I'd like to be an employee, but Oxide sounds like it would be a very exciting gig (though I lack any skill set to remotely justify even contacting them-- I don't think they're looking for heavily opinionated Windows / Linux sysadmins >smile<).
Their gear is targeted at a way larger scale than I'll ever get to use (given the size of the environments I work in). What I hear about their attitudes re: firmware, for example, makes me wish that I could have their gear instead of the iDRACs, PERCs, and other closed-source roach-motel hardware I'm stuck with.
I'm young enough that I just missed the era of computers that Oxide evokes. I put in a couple of DEC Alpha-based machines in the late 90s and got a glimpse of what it might be like to have a vendor who provides a completely integrated hardware/software stack and "ecosystem". I'm sure there was an operational advantage to being a "DEC shop" or a "Sun shop". The PC market crushed that old-school model by wringing out the margin necessary to make that kind of company work. I'd love to see Oxide make a go of it, though.
They're a tech company that makes actual technology. You're allowed to be excited. Better technology upstream has a habit of floating down the stack in some form or another, even when lawyers make it hard. ZFS, for example, was released under a GPL-incompatible license (some people will argue intentionally), yet it has influenced over twenty years of filesystem design in the Linux ecosystem-- things like btrfs, which, coincidentally, is also largely owned by Oracle from an IP standpoint.
Who knows? In twenty years, we could see something cool come out of this, like better U-Boot tooling. Or maybe they'll be purchased by Oracle, which would if nothing else be funny this time.
That is the really cool thing about them-- they're actually making new computers and the software and firmware to go with them.
Everything else "new" seems to be a rehash of the IBM PC (my "server" has an ISA-- ahem-- "LPC" bus... >sigh<). It's so refreshing to see something actually new.
The same goes for software and firmware. Any "new" systems software from the last 10 years seems to be a thin management veneer over real technologies like the Linux kernel, KVM and containers, the GNU userland, etc. And it all ends up running on the same cruddy BMCs, "lights-out" controllers, embedded RAID controllers, etc.
I get a little bit of excitement at ARM-based server platforms (and RISC-V, for that matter) but everything there seems to be at even less of an "enterprise" level (from a reliability, serviceability, and management perspective) than the PC-based servers I already loathe.
Strictly speaking, kernel-level namespaces are the technology. "Containers" are a pattern based on kernel-level namespaces, and "thin management veneers" help make sense of the underlying technology and implement that pattern.
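To make that concrete, here's a minimal sketch of the primitive involved (Linux-only, needs root; the flags and the hostname are just illustrative-- real container runtimes layer cgroups, pivot_root, seccomp, and the management veneer on top of exactly this kind of call):

    /* Minimal namespace demo: put this process into new UTS and mount
     * namespaces, the same kernel primitives container runtimes build on. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/utsname.h>

    int main(void) {
        /* New UTS + mount namespaces: hostname changes and mounts made
         * from here on are invisible to the rest of the system. */
        if (unshare(CLONE_NEWUTS | CLONE_NEWNS) != 0) {
            perror("unshare");
            return 1;
        }
        sethostname("sandbox", 7);   /* arbitrary example name */

        struct utsname u;
        uname(&u);
        printf("hostname inside the namespace: %s\n", u.nodename);
        return 0;
    }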
> Or maybe they'll be purchased by Oracle, which would if nothing else be funny this time.
Oracle will likely be very interested in Oxide, but I suspect Bryan Cantrill would do everything in his power to prevent that happening. He's seen the lawn mower in action before and knows not to anthropomorphize it :)
Not at all; they're building a cool tech stack, but the only thing they sell is super expensive hardware that no individual - and not even that many businesses! - is likely to be able to afford.
So, the only thing really inherent about our price point is that we're selling compute by the rack: as it turns out, a whole rack of server-class CPUs (and its accompanying DRAM, flash, NICs, and switching ASICs) is pretty expensive! But this doesn't mean that it's a luxury good: especially because customers won't need to buy software separately (as one does now for hypervisor, control plane, storage software, etc.), the Oxide rack will be very much cost competitive with extant enterprise solutions.
Cost competitive as it may be, it doesn't mean that it hits the price point for a home lab, sadly. One of the (many) advantages of an open source stack is allowing people to do these kinds of experiments on their own; looking forward to getting our schematics out there too!
I actually suspect it would be a lot easier to support 15 kW of power in my home than 15 kW of cooling.
I know several people with 2x 240V 32A 3-phase in their garage; that's 20+ kW at any reasonable power factor. But a 15 kW cooler that would work in summer would annoy the hell out of any neighbours living closer than a mile.
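Back-of-the-envelope, taking 240 V as the line-to-line voltage (the more conservative reading) and a power factor of around 0.8:

    P \approx 2 \times \sqrt{3} \times 240\,\mathrm{V} \times 32\,\mathrm{A} \times 0.8 \approx 21\,\mathrm{kW}

If that 240 V is phase-to-neutral instead, the available power is higher still.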
Where does this leave companies that would like to take advantage of fully integrated software and hardware (yes, intentionally referring to your old project at Sun), but don't need a full rack's worth of computing power (and maybe never will), and don't have the in-house skills to roll their own? Or do you think that what you're selling really only has significant benefits at a large scale?
I think the intention is that those people are better served by consolidated cloud providers-- or even by a single-digit number of colocated physical servers?
It would be nice to have a known price point from a cloud provider which, once exceeded, makes you ask the question: "Should we buy a rack and colo it?" Even if the answer is "no", it's still good to have that option.
---
The thing is: datacenter technology has moved on from 2011 (when I was getting into datacenters), but only for the big companies (Google, Facebook, Netflix). I think Oxide is bringing the benefits of a "hyperscale" deployment to "normal" (i.e., single/double-digit rack) customers.
Some of those improvements, such as much more efficient DC converters, mean that not every machine needs to do its own AC/DC conversion.
What's kind of messed up, at least for tiny companies like mine, is that renting an ugly PC-based dedicated server from a company like OVH is currently cheaper than paying for the equivalent computing power (edit: and outgoing data transfer) from a hyperscale cloud provider like AWS, even though the hyperscalers are probably using both space and power more efficiently than the likes of OVH. My cofounder will definitely not get on board with paying more to get the same (or less) computing power, just for the knowledge that we're (probably) using less energy. I don't know what the answer is; maybe we need some kind of regulation to make sure that the externalities of running a mostly idle box are properly factored into what we pay?
> renting an ugly PC-based dedicated server from a company like OVH is currently cheaper than renting the equivalent computing power from a hyperscale cloud provider like AWS
That's not surprising; you're basically paying for scalability. An idle box doesn't even necessarily "waste" all that much energy if it's truly idle, since "deep" power-saving states are used pretty much everywhere these days.
Sure, the CPU may enter a power-saving state, but presumably for each box, there's a minimum level of power consumption for things like the motherboard, BMC, RAM, and case fan(s). The reason why AWS bare-metal instances are absurdly expensive compared to OVH dedicated servers is that AWS packs more computing power into each box. So for each core and gigabyte of RAM, I would guess AWS is using less power (edit: especially when idle), because they don't have the overhead of lots of small boxes. Yet I can have one of those small boxes to myself for less than I'd have to pay for the equivalent computing power and bandwidth from AWS.
Interestingly, I believe that unused DIMMs could be powered down if the hardware bothered to support it. Linux has to support memory hotplug anyway because it's long been in use on mainframe platforms, so the basic OS-level support is already there. Since this isn't being addressed in any way by hardware makers, my guess is that RAM power use in idle states is low enough that it basically doesn't matter.
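For what it's worth, the OS-side plumbing really is just a sysfs write; a rough sketch (the block number is an arbitrary example, it needs root and a kernel with memory hot-remove enabled, and whether the DIMM behind it actually drops into a lower-power state is entirely up to the platform-- which is the point):

    /* Sketch: offline one memory block via Linux's memory-hotplug sysfs
     * interface. "memory32" is an arbitrary example; real systems expose
     * one /sys/devices/system/memory/memoryN directory per block. */
    #include <stdio.h>

    int main(void) {
        const char *path = "/sys/devices/system/memory/memory32/state";
        FILE *f = fopen(path, "w");
        if (!f) { perror("fopen"); return 1; }
        /* Writing "offline" asks the kernel to migrate pages off this block
         * and stop allocating from it; it says nothing about whether the
         * hardware then powers down the underlying DIMM. */
        if (fprintf(f, "offline\n") < 0) perror("fprintf");
        fclose(f);
        return 0;
    }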
Well, on paper :) You, personally, are an amazing marketing department no matter what your official title is; "The Soul of a New Machine", for instance, is brilliant at getting mindshare. To be fair, I'm fairly sure you don't think of what you're doing as marketing, but the only difference I see is that this is much more natural/sincere than 99.99% of similar efforts-- you're actually just that passionate and good at sharing your passion.
I am excited about Hubris OS and a few of the other bits.
But my secret hope is to eventually get an Oxide-branded developer laptop. The design language alone would make it worth it. But I'm not holding my breath, as that's clearly not their priority.