Hacker News | dmlittle's comments

It's worth noting that GPUs have a much higher failure rate than traditional CPUs: over 10x the failure rate due to thermal stress, since the amount of heat generated is very different. You can't really replace a GPU in a satellite (at least today?), which would turn most of these satellites into space debris on a ~5 year horizon.

Satellites usually use an older process node, as newer nodes are easily bit-flipped by radiation, and shielding against radiation is heavy.

AI workloads may also tolerate wrong calculations better than CPU workloads, where software will tend to panic.


Which is the same lifetime as a Starlink sat.

So what exactly is the benefit of having that thing in orbit then, where it costs you millions of dollars to put it there?

Self destruction is a feature, not a bug.

That said, eventually they could be lifted to higher orbits and have robots deliver and swap in updated compute (if it isn't made in space itself!).


The current bottleneck on compute is power and zoning. Solar panels are 5x more efficient in space, and there is no zoning in space.

The current bottleneck is silicon. Every chip that is manufactured gets housed and powered. (It makes sense: the cost of compute is dominated by capex and the power costs are comparatively irrelevant, so operators are OK paying a premium for power.)

The space data center hypothesis relies on compute supply growing faster than power supply. (Both are bottlenecked on parts of the supply chain that will take ages to scale.)

Even if you believe that's the case, the point at which orbital data centers start making sense is incredibly sensitive to the exact growth rates.
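A toy sketch of that sensitivity claim (the growth rates and starting ratio below are made-up illustrative numbers, not figures from the thread):

```python
# Illustrative only: orbital DCs start making sense once compute
# supply (chips needing power) outgrows ground power supply. The
# crossover year moves a lot with small changes in growth rates.
def crossover_year(compute_growth, power_growth, initial_ratio=0.5, horizon=30):
    """Year when compute demand first exceeds available ground power."""
    ratio = initial_ratio  # compute demand / power supply today (assumed)
    for year in range(horizon):
        if ratio > 1.0:
            return year
        ratio *= compute_growth / power_growth
    return None  # no crossover within the horizon

# Modest changes to either growth rate shift the crossover by years:
print(crossover_year(1.40, 1.10))  # compute +40%/yr vs power +10%/yr
print(crossover_year(1.30, 1.15))  # narrower gap -> later crossover
```

Nothing here is a forecast; it just shows why the break-even point is so sensitive to the assumed rates.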


The current bottleneck is not silicon. There is plenty of silicon locked up in previous-gen GPUs that are no longer efficient enough to run relative to newer models. The bottleneck is the economics of owning the older GPU models, which is why all the GPU neoclouds are gonna go bust unless they can get customers to keep renting old GPUs.

The economics are vastly different when opex is near zero for these things.


All of that is incorrect.

H100 rental prices are still as high as when the cards were brand new. The prices vastly exceed the power costs.

In a world where power or DC permits were the current bottleneck, those H100s would be getting retired in favor of Blackwells. But they aren't; they're instead being locked into years-long contracts.


Why exactly would the H100s get retired for Blackwells if specifically power and DC permits were the bottleneck?

Because you'd need to trash the old GPUs to make room for new ones. Right now new GPUs mostly come online in new DCs. TSMC fab capacity is much more limiting than DC construction, and that will likely remain the case: it's much easier to build a DC than a fab.

Because they are >10x more power efficient.

If silicon were relatively abundant and power/DC space scarce, you'd get an order of magnitude more bang for the Watt by replacing the H100s with newer GPUs.

But nobody is doing that. Blackwells are being installed as additional capacity, not Hopper replacements.

So it is pretty clear that silicon is the primary bottleneck.


Millions of dollars? Where did you get that number from?

...how much do you think each rocket launch costs?

Not millions of dollars per sat. Are you being intentionally obtuse?

Are you intentionally misreading what I'm saying?

It's been a while since I've checked this, but a few years ago we tried to limit-test Kine on a large-ish cluster and it performed pretty poorly. It's fine for small clusters, but the way it has to implement the watch semantics makes it perform poorly at scale (at least that was the case a few years ago).


Agreed. The subscriptions really are a huge part of the magic, and they're a weak point of Kine. Thanks for chiming in.

Ideally, I'd love to see a database-specific offering: use Postgres async replication (ideally sharded somehow so there's not a single consumer node) feeding some fan-out system that does all the watching.

But etcd mostly does the job and seems unlikely to be going anywhere. It'd be cool, though.
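For what it's worth, the architectural difference being discussed can be sketched like this (a simplified in-process model, not Kine's or etcd's actual code): etcd pushes each change once to every subscriber, while a SQL-backed shim effectively has each watcher re-query for rows past its last-seen revision.

```python
import queue

class FanOut:
    """Push model (etcd-style): one write is delivered to all watchers."""
    def __init__(self):
        self.subs = []

    def subscribe(self):
        q = queue.Queue()
        self.subs.append(q)
        return q

    def publish(self, event):
        for q in self.subs:  # one write fans out to every subscriber
            q.put(event)

# The poll model (what a SQL-backed watch tends to become) instead has
# every watcher repeatedly scan for revisions newer than its cursor,
# so backend load grows with watchers * poll frequency.
bus = FanOut()
watchers = [bus.subscribe() for _ in range(3)]
bus.publish(("PUT", "/registry/pods/x", b"..."))
events = [q.get() for q in watchers]
print(events[0])
```

Kubernetes clusters run thousands of watches, which is why the push model matters so much at scale.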


The node failure rate is much higher than that. On a 1M node cluster of cloud-managed instances (AWS, GCP, Azure, etc.) you'd likely see failures a few times a month, if not more.


Yep. And the chances that the DB node with the control plane fails are therefore less than one in ten thousand.
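Back-of-envelope version of that claim (the fleet-wide failure count is an assumed illustrative number):

```python
# If a 1M-node fleet sees roughly 3 node failures per month, the
# per-node monthly failure probability is about 3 in a million.
fleet = 1_000_000
failures_per_month = 3  # assumption for illustration
p_month = failures_per_month / fleet

# Chance that the one specific node hosting the control-plane DB
# fails at some point during a year:
p_year = 1 - (1 - p_month) ** 12
print(f"{p_year:.6f}")  # comfortably under 1 in 10,000
```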


It depends on how much effort you put into it vs. just using any of the base templates/components that Tailwind Plus (previously Tailwind UI) has to offer.

If you look at their Showcase section[1], you can't tell most of them are using Tailwind CSS (imo).

[1] https://tailwindcss.com/showcase


Jeremy Rubin built a proof-of-concept for this over a decade ago for a hackathon and ended up being sued by the state of New Jersey. This blog post[0] has a good summary of the events.

[0] https://ethanzuckerman.com/2015/05/28/the-death-of-tidbit-an...


This guy even pitched it as an alternative to ad revenue; I'd felt so clever lol.

Crypto is much better known now than when that occurred. Wouldn't surprise me if something like this would still get sued, though.


Just an (uneducated) guess here: I don't believe capsules carry enough fuel to accomplish this.


The manned capsules, at least, have a launch abort system that was probably originally intended to evolve into landing-capable engines. They even released renders of it years ago.

https://en.wikipedia.org/wiki/SuperDraco

In fact, they can currently be used for propulsive landing… if the parachutes fail. https://www.nasaspaceflight.com/2024/10/dragon-propulsive-la...


It depends on the UUID version you're using. For version 4 (random), RFC 9562 requires that digit to always be 4. So 99999999-9999-9999-9999-999999999999 is a valid UUID, but not a valid UUIDv4. If you wanted to be pedantic, the website should have been named https://everyuuidv4.com/

https://datatracker.ietf.org/doc/html/rfc9562
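You can check this with Python's standard uuid module, which exposes the version field directly:

```python
import uuid

# Per RFC 9562, a v4 UUID stores 4 in its 13th hex digit (the
# "version" field); a conforming v4 generator always sets it.
u = uuid.uuid4()
assert str(u)[14] == "4"  # xxxxxxxx-xxxx-4xxx-...
assert u.version == 4

# The all-nines string parses fine as a UUID, but its version field
# reads as 9, so no v4 generator can ever emit it.
nines = uuid.UUID("99999999-9999-9999-9999-999999999999")
print(nines.version)  # 9
```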


The last line of https://xkcd.com/566/, except it's UUID formats.


Are you suggesting we should never have made the random one, and stuck with mac address plus timestamp forever?


I think object identifiers would be better, although they should add another arc that does not require registration, based on: (fixed prefix).(type of identifier).(number of days past epoch).(parts according to type of identifier).(optional extra parts). (I had partially written my proposal, and I would want ITU and/or ISO (preferably ITU) to approve it and then manage it.) For example, type 0 could mean international telephone numbers, type 1 could mean version 4 IP addresses, type 2 could mean domain names (encoding each part as bijective base 37, from right to left), type 3 could mean a combination of geographic coordinates with radio frequencies, type 4 could mean telephone numbers with auto-delegated telephone extensions, etc. (I had also considered such things as automatic delegation, clock drift, etc.; it is more carefully considered than UUID and some other types of identifiers.)
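For readers unfamiliar with "bijective base 37": a DNS label uses 37 characters (a-z, 0-9, '-'), and a bijective numeral system maps them to digits 1..37 with no zero, so every integer decodes to exactly one string. A sketch (the alphabet ordering is my assumption, not part of the parent's proposal):

```python
# Bijective base-37 encoding of a DNS label, digits 1..37 (no zero),
# so the mapping between labels and integers is one-to-one.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-"  # assumed ordering

def encode_label(label: str) -> int:
    n = 0
    for ch in label:
        n = n * 37 + (ALPHABET.index(ch) + 1)  # digit values 1..37
    return n

def decode_label(n: int) -> str:
    chars = []
    while n:
        n, d = divmod(n - 1, 37)  # the standard bijective-base trick
        chars.append(ALPHABET[d])
    return "".join(reversed(chars))

print(encode_label("a"))  # 1
assert decode_label(encode_label("example")) == "example"
```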


Sounds like you're in the realm of URNs? I'm not sure about that scheme; I think there's a benefit to a short, fixed-size ID. Though maybe for the domain-name example you could have an alternate form that hashes any domain over 20-30 characters.


I actually believe we shouldn't have made any of them


Oh okay. That's a pretty different suggestion from the comic.

Would you suggest random 128 bit numbers, then? Otherwise it's hard to see what else would serve the same role without being UUID in a trenchcoat. And having identifiers is important.


Yeah, I would really like it if a UUID were just 128 bits of randomness and nothing else. The whole version thing sucks, and my point (and you're right that the ordering is a little off) is that UUIDv4 is the only good one and the rest basically shouldn't exist. UUIDv4 itself is ruined by the fact that it needs a version embedded in it because the others exist.
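Concretely, the difference is small but real: UUIDv4 fixes 6 bits (4 version + 2 variant), leaving 122 bits of randomness, whereas a "plain" identifier keeps all 128. A quick sketch of the two:

```python
import secrets
import uuid

# UUIDv4: random except for the embedded version/variant fields.
v4 = uuid.uuid4()
assert v4.version == 4           # the fixed digit the parent dislikes

# The wished-for alternative: 128 random bits with no structure.
plain = secrets.token_bytes(16)
print(plain.hex())               # 32 hex chars, no dashes, no fixed digits
```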


> C) Only one staging env per customer. Want to check what a new setting will do? Every developer is getting that setting turned on.

Stripe Sandboxes[1] aim to solve this problem!

(Disclaimer: I work for Stripe but not on this feature)

[1] https://docs.stripe.com/sandboxes


"up to six" is definitely an improvement, but still a long way from "ephemeral test environments on demand".


There are some explicit differences from Python[1]. My understanding is that Starlark was created specifically for Bazel, so if I had to guess, it's to enforce the immutability of values between contexts.

[1] https://bazel.build/rules/language#differences_with_python
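A rough Python analogy for that immutability (this simulates the idea; it is not actual Starlark behavior or code): Starlark "freezes" values once their module finishes loading, so one file can't mutate state that another observes. The closest stdlib analogue is turning a list into a tuple.

```python
# While a .bzl file executes, its lists are mutable:
build_deps = ["//lib:a", "//lib:b"]
build_deps.append("//lib:c")          # fine during "loading"

# Conceptually, Bazel freezes the value at module exit; a tuple
# models that frozen state in Python:
frozen_deps = tuple(build_deps)
try:
    frozen_deps.append("//lib:d")     # Starlark raises a similar error
except AttributeError as e:
    print("frozen:", e)
```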


Thank you for TA'ing the class! I took it in 2015, and the TAs really made the class for me and most of my friends.

Is the end-of-semester Leiserchess competition still going? I believe I heard that the year you TA'd it (might have been a year before or after), a group finally compiled an opening playbook and beat everyone in the class.


I haven’t followed the course closely, but I’d assume there’s still some variation of the game going on, Charles was quite fond of it.

I’m not sure what you mean by “finally compiled an opening playbook”; by the time I was involved with the class, opening books were table stakes, and we even included code to generate them as part of the starting distribution. They’re somewhat useful, but effective culling of the search space in multithreaded contexts is far more important. The opening book can only ever help for the first few ply; after that, the engine which can consistently think a move ahead will likely win. (Indeed, the staff designed the rules to optimize for that very property.) A quick and informative heuristic function is of course also critical.

