Hacker News | bombela's comments

Link-local is mandatory for IPv6 to work. Technically, everybody you have ever seen is using it: it is unlikely that you know somebody without a cellphone, and as far as I know, all cellphone networks are IPv6-first.

https://en.wikipedia.org/wiki/Link-local_address#IPv6


Did you reply to the wrong comment? I know link-local works in IPv6; we were discussing IPv4.

The entry point of interest is probably ptrace: https://man7.org/linux/man-pages/man2/ptrace.2.html


> But these are plastics that fail under heat.

All materials ultimately succumb when exposed long enough to a high enough temperature.

What is the temperature range to match here?


> All materials ultimately succumb when exposed long enough to a high enough temperature.

I'm not a material scientist, but I don't believe that to be true. Metals don't to my knowledge; they suffer oxidation, which is allayed by the presence of oil.

If you mean plastics in particular, then PEEK would be ideal to my knowledge - it's suitable for immersion in gasoline and similar solvents, and I've used it in the past for a fuel pump mounting bracket that sits inside the fuel tank of a (gasoline) vehicle. I checked it after a year and it doesn't seem to be any worse for wear.

It's just a huge pain to print!

> What is the temperature range to match here?

I'm not sure, and likely couldn't be sure without a fair amount of research. If I had to print this for a plane, I'd want to do that and measure temperature in use and under high load and destructively test several drafts to ensure performance.

From what I've seen in this instance though, the failed part showed a Tg (glass transition temperature) of 55ºC - basically exactly that of PLA-CF. The pilot believed it was ABS-CF, which has a Tg of ~100ºC. If we assume that 100ºC was at least higher than the expected operating temperature, PEEK (Tg: 143ºC) would have given a ~50% safety margin.


Yep, and they also silently downgrade resolution and audio channels on an ever-changing and hidden list of browsers/OSes/devices over time.

Meanwhile, pirated movies are in Blu-ray quality, with all the audio and language options you could dream of.


> * 5-15ms downtime to backup a live sqlite db with a realistic amount of data for a crud db

Did you consider using a filesystem with atomic snapshots? For example, SQLite with WAL on btrfs. As far as I can tell, this should have decent mechanical sympathy.

edit: I didn't really explain myself. This is for zero-downtime backups: snapshot, back up at your own pace, delete the snapshot.
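For illustration, here is a minimal sketch of that flow (not battle-tested; the paths are hypothetical, and it assumes the SQLite database and its -wal/-shm files live on their own btrfs subvolume, with the btrfs and cp binaries available):

    use std::process::Command;

    // Hypothetical paths: /srv/db is assumed to be a btrfs subvolume holding
    // the SQLite database (and its -wal/-shm files); /backups is the target.
    const SUBVOL: &str = "/srv/db";
    const SNAP: &str = "/srv/db-snap";
    const DEST: &str = "/backups/db-copy";

    fn run(cmd: &str, args: &[&str]) -> std::io::Result<()> {
        let status = Command::new(cmd).args(args).status()?;
        if !status.success() {
            return Err(std::io::Error::new(
                std::io::ErrorKind::Other,
                format!("{cmd} {args:?} failed: {status}"),
            ));
        }
        Ok(())
    }

    fn main() -> std::io::Result<()> {
        // 1. Atomic, read-only snapshot: the only step that touches the hot path.
        run("btrfs", &["subvolume", "snapshot", "-r", SUBVOL, SNAP])?;
        // 2. Copy the frozen snapshot at our own pace (rsync/tar work just as well).
        run("cp", &["-a", SNAP, DEST])?;
        // 3. Drop the snapshot so the extra COW bookkeeping goes away.
        run("btrfs", &["subvolume", "delete", SNAP])
    }

Because the snapshot is crash-consistent, SQLite should recover the copy by replaying the -wal file the first time it is opened.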


If it’s at 5-15ms of downtime already, you’re in the space where the “zero” downtime FS might actually cause more downtime. In addition to pauses while the snapshot is taken, you’d need to carefully measure things like performance degradation while the snapshot exists (incurring COW costs) and while it’s being GCed in the background.

Also, the last time I checked the Linux scheduling quantum was about 10ms, so it's not clear backups are even going to be the maximum-duration downtime while the system is healthy.


I am not so sure you know what you are talking about. Feel free to provide some reading material for my education.

Why would the scheduler tick frequency even matter for this discussion, even on a single CPU/core/thread system? For what it's worth, the default scheduler tick rate has been 250Hz (a 4ms tick) since 2005. Earlier this year somebody proposed switching back to 1000Hz (1ms).

https://btrfs.readthedocs.io/en/latest/dev/dev-btrfs-design....
https://docs.kernel.org/admin-guide/pm/cpuidle.html
https://docs.redhat.com/en/documentation/red_hat_enterprise_...
https://sqlite.org/wal.html#ckpt
https://www.phoronix.com/news/Linux-2025-Proposal-1000Hz


Well, I haven’t checked it for a while. Still, try measuring FS latencies during checkpoints, or just write a tight loop program that reads cached data and prints max latencies once an hour. Use the box for other stuff while it runs.
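Not the exact tool being suggested, but a rough sketch of that kind of probe, assuming a small pre-created file at a hypothetical path that stays resident in the page cache:

    use std::fs::File;
    use std::io::{Read, Seek, SeekFrom};
    use std::time::{Duration, Instant};

    fn main() -> std::io::Result<()> {
        // Hypothetical probe file; at least 4 KiB, small enough to stay cached in RAM.
        let mut f = File::open("/var/tmp/latency-probe.bin")?;
        let mut buf = vec![0u8; 4096];
        let mut max = Duration::ZERO;
        let mut window = Instant::now();

        loop {
            let start = Instant::now();
            f.seek(SeekFrom::Start(0))?;
            f.read_exact(&mut buf)?;
            max = max.max(start.elapsed());

            // Once an hour, report the worst-case cached read latency and reset.
            if window.elapsed() >= Duration::from_secs(3600) {
                println!("max cached read latency over the last hour: {max:?}");
                max = Duration::ZERO;
                window = Instant::now();
            }
        }
    }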


https://github.com/accretional/collector/blob/main/pkg/colle...

There is no way btrfs can be slower than this in any shape or form.

If we are comparing something simpler, like periodically making a copy of the SQLite database, it makes sense for a COW snapshot to be faster than copying the whole database. After reading the btrfs documentation, it seems reasonable to assume that the snapshot latency will stay constant, while a full copy would slow down as the single-file database grows bigger.

And so it stands to reason that freezing the database during a full copy is worse than freezing it during a btrfs snapshot. A full copy of the snapshot can then be performed, optionally with a lower IO priority for good measure.

It should be obvious that the less data is physically read or written on the hot path, the less impact there is on latency.

For what it's worth, here is a benchmark comparing IO performance on a few Linux filesystems, including some SQLite tests. https://www.phoronix.com/review/linux-615-filesystems


Ovens, induction cooktops, electric car chargers, dryers, etc. are already 240V at high amperage, with a dedicated circuit.

The EU also mandates dedicated circuits for big appliances, so there is no difference in practice.

The two things I can think of are an electric kettle and a raclette machine.

Tools are mostly battery-powered these days. A home workshop would most likely be wired for 240V or three-phase anyway.

What else are you missing?


Alas, my workshop didn't come with 240 already run, so that was an added expense to get my welder set up.

An electric tea kettle that didn't take an hour to warm up would be very nice.

My well pump runs on 120v, and when the motor kicks in the whole house knows.

240V has lower voltage drop over distance, gives off less heat due to lower amperage for the same wattage, and since we're dreaming, we could switch over to a sane plug design like Type F or G instead of A and B.


> An electric tea kettle that didn't take an hour to warm up would be very nice.

I've been using electric kettles in North America, and whilst they take longer, we're talking 5 minutes, not an hour.

Some hyperbole can be appropriate but you're just being disingenuous here, or you've never actually used a kettle.


This thread warms my heart. Rust has set a new baseline that many, myself included, now take for granted.

We are now discussing what can be done to improve code correctness beyond memory and thread safety. I am excited for what is to come.


Really not! This is a huge faceplant for writing things in Rust. If they had been writing their code in Java/Kotlin instead of Rust, this outage either wouldn't have happened at all (a failure to load a new config would have been caught by a defensive exception handler), or would have been resolved in minutes instead of hours.

The most useful thing exceptions give you is not static compile time checking, it's the stack trace, error message, causal chain and ability to catch errors at the right level of abstraction. Rust's panics give you none of that.

Look at the error message Cloudflare's engineers were faced with:

     thread fl2_worker_thread panicked: called Result::unwrap() on an Err value
That's useless, barely better than "segmentation fault". No wonder it took so long to track down what was happening.

A proxy stack written in a managed language with exceptions would have given an error message like this:

    com.cloudflare.proxy.botfeatures.TooManyFeaturesException: 200 > 60
        at com.cloudflare.proxy.botfeatures.FeatureLoader(FeatureLoader.java:123)
        at ...
and so on. It'd have been immediately apparent what went wrong. The bad configs could have been rolled back in minutes instead of hours.

In the past I've been able to diagnose production problems based on stack traces so many times that I have been expecting an outage like this ever since the trend away from providing exceptions in new languages in the 2010s. A decade ago I wrote a defense of the feature, and I hope we can now have a proper discussion about adding exceptions back to languages that need them (primarily Go and Rust):

https://blog.plan99.net/what-s-wrong-with-exceptions-nothing...


That has nothing to do with exceptions, just the ability to unwind the stack. Rust can certainly give you a backtrace on panics; you don’t even have to write a handler to get it. I would find it hard to believe Cloudflare’s services aren’t configured to do it. I suspect they just didn’t put the entire message in the post.


https://doc.rust-lang.org/std/backtrace/index.html#environme...

tldr: Capturing a backtrace can be a quite expensive runtime operation, so the environment variables allow either forcibly disabling this runtime performance hit or selectively enabling it in some programs.

By default it is disabled in release mode.
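As a sketch of how a service can opt back in programmatically, regardless of those environment variables (std::backtrace::Backtrace::force_capture ignores them), with the caveat that useful symbol names still require keeping debug info in the release build:

    use std::backtrace::Backtrace;
    use std::panic;

    fn main() {
        // Record where every panic came from, even in release builds where
        // RUST_BACKTRACE is unset (force_capture ignores the env variables).
        panic::set_hook(Box::new(|info| {
            let bt = Backtrace::force_capture();
            eprintln!("panic: {info}\nbacktrace:\n{bt}");
        }));

        // Deliberate failure to exercise the hook.
        let conf: Result<u32, &str> = Err("too many features");
        conf.unwrap();
    }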


It's one of the problems with using result types. You don't distinguish between genuinely exceptional events and things that are expected to happen often on hot paths, so the runtime doesn't know how much data to collect.


A panic is the exceptional event. It so happens that Rust doesn't print a stack trace in release builds unless configured to do so.

Similarly, capturing a stack trace in an error type (within a Result, for example) is perfectly possible. But this is a choice left to the programmer, because capturing a trace is not cheap.
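A rough sketch of that trade-off with a hand-rolled error type (crates like anyhow do essentially this, more ergonomically); the names and the 200 > 60 limit are just illustrative:

    use std::backtrace::Backtrace;
    use std::fmt;

    // Illustrative error type that pays for a backtrace only when it is built.
    #[derive(Debug)]
    struct ConfigError {
        message: String,
        backtrace: Backtrace,
    }

    impl ConfigError {
        fn new(message: impl Into<String>) -> Self {
            Self {
                message: message.into(),
                // The capture cost is explicit and only paid on the error path.
                backtrace: Backtrace::force_capture(),
            }
        }
    }

    impl fmt::Display for ConfigError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "{}\n{}", self.message, self.backtrace)
        }
    }

    fn load_features(count: usize, limit: usize) -> Result<(), ConfigError> {
        if count > limit {
            return Err(ConfigError::new(format!("too many features: {count} > {limit}")));
        }
        Ok(())
    }

    fn main() {
        if let Err(e) = load_features(200, 60) {
            eprintln!("{e}");
        }
    }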


There's clearly a big gap in how things are done in practice. You wouldn't see anyone call System.exit in a managed language if a data file was bigger than expected. You'd always get an exception.

I used to be an SRE at Google. Back then we also had big outages caused by bad data files pushed to prod. It's a common enough issue so I really sympathize with Cloudflare, it's not nice to be on call for issues like that. But Google's prod environments always generated stack traces for every kind of failure, including CHECK failures (panics) in C++. You could also reflect the stack traces of every thread via HTTP. I used to diagnose bugs in production under time pressure quite regularly using just these tools. You always need detailed diagnostics.

Languages shouldn't have panics, tbh, it's a primitive concept. It so rarely makes sense to handle errors that way. I know there's a whole body of Rust/Go lore claiming panics are fine, but it's not a good move and is one of the reasons I've stayed away from Go over the years and wouldn't use Rust for anything higher than low level embedded components or operating system code that has to export a C ABI. You always want diagnostics and recoverable errors; this kind of micro-optimization doesn't make sense outside of extremely constrained embedded environments that very few of us work in.


A panic in Rust is the same as an exception in C++. You can catch it all the same.

https://doc.rust-lang.org/std/panic/index.html

An uncaught exception in C++ or an uncaught panic in Rust terminates the program. The unwinding is the same mechanism. I think the implementation is what comes with LLVM, but I haven't checked.
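For example, a request loop can catch the unwind at its boundary, log it, and keep serving. A minimal sketch (not Cloudflare's actual code; the handler and its failure mode are made up):

    use std::panic;

    // Hypothetical per-request handler that may panic somewhere deep inside.
    fn handle_request(id: u32) -> String {
        if id == 0 {
            // Simulates an unexpected unwrap failure down the stack.
            panic!("called `Result::unwrap()` on an `Err` value");
        }
        format!("ok: {id}")
    }

    fn main() {
        for id in [1, 0, 2] {
            // Catch the unwind at the top of the loop and keep going,
            // much like a try/catch around a request dispatcher.
            match panic::catch_unwind(|| handle_request(id)) {
                Ok(resp) => println!("{resp}"),
                Err(_) => eprintln!("request {id} panicked; returning an error and moving on"),
            }
        }
    }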

I was also a Google SRE, and I liked the stacktrace facilities so much that I got permission to open source a library inspired from it: https://github.com/bombela/backward-cpp (I know I am not doing a great job maintaining it)

At Uber I implemented a similar stacktrace introspection for RPC tasks via HTTP for Go services.

You can also catch a Go panic. Which we did in our RPC library at Uber.

It would be great for all of that to somehow come ready made though. A sort of flag "this program is a service, turn on all the good diagnostics, here is my main loop".


OK, so the issue is frameworks not catching panics and logging proper stack traces? Very cool that you made a library.


Alternatively you can look at actually innovative programming languages to peek at the next 20 years of innovation.

I am not sure that watching the trendy forefront successfully reach the 1990s and discuss how unwrapping Option is potentially dangerous really warms my heart. I can't wait for the complete meltdown when they discover effect systems in 2040.

To be more serious, this kind of incident is yet another reminder that software development remains miles away from proper engineering, and even key providers like Cloudflare utterly fail at proper risk management.

Celebrating because there is now one popular language using static analysis for memory safety feels to me like being happy we now teach people to swim before a transatlantic boat crossing while we refuse to actually install lifeboats.

To me the situation has barely changed. The industry has been refusing to put in place strong reliability practices for decades, keeps significantly under-investing in tools that mitigate errors outside of a few fields where safety was already taken seriously before software was a thing, and keeps hiding behind the excuse that we need to move fast and safety is too complex and costly, while regulation remains extremely lenient.

I mean, this Cloudflare outage probably cost millions of dollars of damage in aggregate between lost revenue and lost productivity. How much of that will they actually have to pay?


Let's try to make effect systems happen quicker than that.

> I mean, this Cloudflare outage probably cost millions of dollars of damage in aggregate between lost revenue and lost productivity. How much of that will they actually have to pay?

Probably nothing, because most paying customers of cloudflare are probably signing away their rights to sue Cloudflare for damages by being down for a while when they purchase Cloudflare's services (maybe some customers have SLAs with monetary values attached, I dunno). I honestly have a hard time suggesting that those customers are individually wrong to do so - Cloudflare isn't down that often, and whatever amount it cost any individual customer by being down today might be more than offset by the DDOS protection they're buying.

Anyway if you want Cloudflare regulated to prevent this, name the specific regulations you want to see. Should it be illegal under US law to use `unwrap` in Rust code? Should it be illegal for any single internet services company to have more than X number of customers? A lot of the internet also breaks when AWS goes down because many people like to use AWS, so maybe they should be included in this regulatory framework too.


> I honestly have a hard time suggesting that those customers are individually wrong to do so - Cloudflare isn't down that often, and whatever amount it cost any individual customer by being down today might be more than offset by the DDOS protection they're buying.

We have collectively agreed to a world where software service providers have no incentive to be reliable, as they are shielded from the consequences of their mistakes, and somehow we see it as acceptable that software has a ton of issues and defects. The side effect is that research on actually lowering the cost of safety has little return on investment. It doesn't have to be so.

> Anyway if you want Cloudflare regulated to prevent this, name the specific regulations you want to see.

I want software providers to be liable for the damage they cause, and minimum quality regulation on par with an actual engineering discipline. I have always been astounded that nearly all software licences start with extremely broad limitation-of-liability provisions and people somehow feel fine with it. Try to extend that to any other product you regularly use in your life and see how that makes you feel.

How to do proper testing, formal methods, and resilient design has been known for decades. I would personally be more than okay with "let's move less fast and stop breaking things."


> I want software providers to be liable for the damage they cause, and minimum quality regulation on par with an actual engineering discipline. I have always been astounded that nearly all software licences start with extremely broad limitation-of-liability provisions and people somehow feel fine with it. Try to extend that to any other product you regularly use in your life and see how that makes you feel.

So do you want to make it illegal to publish GNU GPL licensed software, because that license has a warranty disclaimer? Do you want to make it illegal for a company like Cloudflare to use open source licensed software with similar warranty disclaimers, or for the SLA agreements and penalties for violating them that they make with their own paying customers to be legally unenforceable? What if I just have a personal website and I break the JavaScript on it because I was careless; how should that be legally treated?

I'm not against research into more reliable software or using better engineering techniques that result in more reliable software. What I'm concerned about is the regulatory regime - in other words, what software it is or is not legal to write or sell for money - and how to properly incentivize software service providers to use techniques that result in more reliable software without causing a bunch of bad second order effects.


I absolutely do not mind, yes.

You can't go out in the middle of your city, build a shoddy bridge, say you waive all responsibility, and then wash your hands of the consequences when it predictably breaks. Why can you do that with pieces of software?

Limiting the scope of liability waivers is not the same thing as censoring what software can be produced. It's just ensuring that everyone actually takes responsibility for the things they distribute.

As I said previously, the current situation doesn't make sense to me. People have been brainwashed into believing that the way software is released currently, half-finished and riddled with bugs, is somehow normal and acceptable. It absolutely doesn't have to be this way.

It's beyond shameful that the average developer today is blissfully unaware of anything related to producing actually secure software. I am pretty sure I can walk into more than 90% of development shops today and no one there will know what formal methods are. With some luck, they might have some static analysers running, probably from a random provider, and be happy with the crappy percentages they output.

It's not about research. It's about a field which entirely refuses to become mature despite being pivotal to the modern economy. And why would it? Software products somehow get a free pass for the shit they push on everyone.

We are in the classic "market for lemons" trap where negative externalities are not priced in and investing in security will just make you lose against companies that don't care. Every major incident reminds us we need out. The market has already shown it won't self-correct. It's a classic case where regulatory intervention is necessary and legitimate.

The shift is already happening, by the way. The EU Product Liability Directive was adopted in 2024 and the transition period ends in December 2026. The US "National Cybersecurity Strategy" signals intent to review the status quo. It's coming faster than people realise.


I find myself in the odd position of agreeing with you both.

That we're even having this discussion is a major step forward. That we're still having this discussion is a depressing testament to how slowly the mainstream has adopted better ideas.


I agree with you. But considering nobody learns any real engineering in software, myself solidly included, this is still an improvement.

But yes, I wish I had learned more, and somehow stumbled upon all the good stuff, or been taught at university at least what Rust achieves today.

I think it has to be noted that Rust still allows performance with the safety it provides. So that's something, maybe.


> I can’t wait for the complete meltdown when they discover effect systems in 2040

Zig is undergoing this meltdown. Shame it's not memory safe. You can only get so far in developing programming wisdom before Eternal September kicks in and we're back to re-learning all the lessons of history as punishment for the youthful hubris that plagues this profession.


I have been using freecad extensively. Almost daily. It's an absolute utter mess. It barely works. But it's essentially the only open source CAD. So I keep trucking.

The most important improvement is the toponaming heuristic solver spearheaded by Realthunder.

Since that was merged into mainline, it seems that the devs keep breaking the UX and shortcuts without rhyme or reason, while the fundamentals are broken beyond repair.

I would never recommend FreeCAD to anybody, even though this is the only CAD I use, and I actually write Python for it for some automation.

I cannot live without freecad. But damn it's a mess.


Another opensource CAD tool to look at is Dune 3D: https://dune3d.org/

which has been discussed here in the past:

https://news.ycombinator.com/item?id=37979758

https://news.ycombinator.com/item?id=40228068

https://news.ycombinator.com/item?id=41975958

which if it just had parameters/scripting would have a lot more potential.


I was somewhat excited about it, but one feature I wanted for the thing I was doing at the time was importing an SVG, which it didn't support back then (and, from a cursory GitHub search, still doesn't?).

It's a shame, because it looks really nice. Maybe I'll check it out for the next thing I do where that's not a requirement. Might be a shame since I've finally learnt how to (basically) use freecad now!


Importing an SVG will require support for Bézier curves, which is a tough lift mathematically.


Looks like they've got some support for Béziers internally (and they allow exporting SVGs), so I assume the building blocks should be there? I definitely have no idea, and haven't looked into it though.


As an opposing viewpoint, I also use FC extensively for designing moderately complex parts (fully parametrically constrained assemblies, dozens of parts per assembly, mechanical components involving motion).

I've also extended the functionality with python, and have heavily customized the theme and shortcuts to fit my personal taste.

I not only tolerate the software, but enjoy using it, and am quite proficient at it.

I would recommend FreeCAD to others, but with some caveats. The most important being that they need to be willing to tolerate a few hours of introductory material, and second that they are serious about using the software long-term.

Otherwise, I'd probably just recommend Onshape. But, for many others, FC is fully viable.


> Pressing page-down on a text-page or in a text-editor, without animation, it takes me a lot of time and energy to find the place where I left off reading or editing before scrolling.

We used to be able to look at the scroll bar to keep track.

Furthermore, page down/up used to move a full page consistently, but today it might as well be a random amount specific to the application or content, making it impossible to train muscle memory.


