locknitpicker's comments | Hacker News

> The entire point of the article is that you cannot throw from a destructor.

You need to read the article again, because your assertion is patently false. You can throw and handle exceptions inside destructors. What you cannot do is let those exceptions escape the destructor, because per the standard (destructors are noexcept by default since C++11) an exception that leaves a destructor calls std::terminate and the application is terminated immediately.
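
To make the distinction concrete, here is a minimal sketch (assuming C++11 or later, where destructors are noexcept by default; the class names are purely illustrative):

    #include <cstdio>
    #include <stdexcept>

    struct Fine {
        ~Fine() {
            try {
                throw std::runtime_error("cleanup hiccup");  // thrown *and* caught inside
            } catch (const std::exception& e) {
                std::fprintf(stderr, "logged: %s\n", e.what());  // handled, execution continues
            }
        }
    };

    struct Fatal {
        ~Fatal() {
            // Destructors are noexcept by default, so letting this escape
            // calls std::terminate instead of propagating to the caller.
            throw std::runtime_error("escaping");
        }
    };

    int main() {
        { Fine f; }   // prints the log line and carries on
        { Fatal f; }  // std::terminate: the program dies here
    }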


You can throw in a destructor but not from one, as the quoted text rightly notes.


So inside a destructor, throw has a radically different behaviour that makes it useless for communicating non-fatal errors.


> So inside a destructor, throw has a radically different behaviour that makes it useless for communicating non-fatal errors.

It's weird how you tried to frame a core design feature of the most successful programming language in the history of mankind as "useless".

Perhaps the explanation lies in how you tried to claim that exceptions had any place in "communicating non-fatal errors", not to mention that your scenario, handling non-fatal errors when destroying a resource, is fundamentally meaningless.

Perhaps you should take a step back and think about whether it makes sense to extrapolate your mental models onto languages you're not familiar with.


> So some kids would complain that C++ destructors RAII philosophy require creating a whole "class X{public:~X()}" which is sometimes inconvenient so it doesn't exactly equal "finally".

Those figurative kids would be stuck in a mental model where they try to shoehorn their ${LanguageA} idioms onto applications written in ${LanguageB}. As the article says, C++ has had destructors since the "C with Classes" days. Complaining that you might need to write a class is specious reasoning, because if you have a resource worth managing, you already use RAII to manage it. And RAII is one of the most fundamental and defining features of C++.

It all boils down to whether one knows what they are doing, or even bothers to know what they are doing.


Ok, but sometimes you just need a single line in a finally and writing a class is more annoying


> Ok, but sometimes you just need a single line in a finally and writing a class is more annoying

I don't think you understand.

If you need to run cleanup code whenever you need to destroy a resource, there is already a special member function designed to handle that: the destructor. Read up on RAII.

If you somehow failed to understand RAII and basic resource management, you can still use one-liners. Read up on scope guards.

If you are too lazy to learn about RAII and too lazy to implement a basic scope guard, you can use one of the many scope guard implementations around. Even Boost ships one (Boost.Scope):

https://www.boost.org/doc/libs/latest/libs/scope/doc/html/sc...
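
For reference, a bare-bones scope guard is only a handful of lines. This is a sketch assuming C++17, not the Boost.Scope API linked above:

    #include <cstdio>
    #include <utility>

    // Bare-bones scope guard: runs the stored callable when it goes out of scope.
    template <typename F>
    class ScopeGuard {
    public:
        explicit ScopeGuard(F f) : f_(std::move(f)) {}
        ~ScopeGuard() { f_(); }
        ScopeGuard(const ScopeGuard&) = delete;
        ScopeGuard& operator=(const ScopeGuard&) = delete;
    private:
        F f_;
    };

    int main() {
        std::FILE* f = std::fopen("data.txt", "r");
        if (!f) return 1;
        ScopeGuard close_f([&] { std::fclose(f); });  // the one-liner "finally"
        // ... use f; it is closed on every exit path, including exceptions ...
    }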

So, unless you are lazy and want to keep mindlessly writing Java in ${LANGUAGE} regardless of whether it makes sense or not, there is absolutely no reason at all to want a finally in C++.


Slightly more than that: if you need to run cleanup code, whatever needs cleaning up should be a class that does the cleanup in its destructor.

Take a file handle, for instance. Don't use open() or fopen() and then try to close it in a finally. Instead, use a file class and let it close itself by going out of scope.
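
A sketch of that shape (a hypothetical minimal wrapper for illustration; in real code std::fstream or a similar library type already does this for you):

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Hypothetical minimal RAII wrapper around a C FILE*.
    class File {
    public:
        File(const std::string& path, const char* mode)
            : handle_(std::fopen(path.c_str(), mode)) {
            if (!handle_) throw std::runtime_error("cannot open " + path);
        }
        ~File() { if (handle_) std::fclose(handle_); }  // closes on every exit path
        File(const File&) = delete;
        File& operator=(const File&) = delete;
        std::FILE* get() const { return handle_; }
    private:
        std::FILE* handle_;
    };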


> It's mostly built atop problem shifting. For example, Seattle fought to send their compost and build wind farms in eastern Washington - where it was in someone else's backyard.

This is a silly opinion to have. It's like complaining that reinforcing police presence in an area is problem shifting because you'll still have crime taking place somewhere else. It's an attempt to frame any action as a false dilemma that forces an all-or-nothing logic based on specious reasoning.


I feel this is a poor article whose main premise is patently false, and misrepresents the nature of a very crude mistake in cloud engineering: rolling out breaking changes.

As the blogger failed to identify and understand the root cause of the problem, the proposed solution also makes no sense.

The underlying issue has nothing to do with cache versioning. It has everything to do with failing to understand what a breaking change is, that pushing a breaking change to a contract does create problems, and that the lack of any effective testing process will allow crude mistakes to slip into production.

To be blunt, versioning the cache would not solve the problem. The blogger already stated that they failed to understand they were pushing a breaking change to the contract. If you don't realize you are changing the contract, you are not going to go through the trouble of bumping your cache version either. Therefore the failure mode is not addressed and the problem is still present.

From the description, the problems were noticed when new instances failed to deserialize data saved by old instances. This spells an entirely different failure mode that the blogger failed to even understand: why is the system retaining cached data that causes it to throw errors? If those entries were purged, the failure would be mitigated and only transient, proportional to the rollout rate. Purging the whole cache would also completely fix the issue after the full rollout. Moreover, if the cache isn't purged, rolling back the changes wouldn't get the system back into a consistent state.
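
A minimal sketch of that mitigation, assuming a toy in-memory cache and hypothetical decode/recompute hooks (none of these names come from the post): treat an entry that fails to deserialize as a cache miss, purge it, and recompute from the source of truth.

    #include <functional>
    #include <stdexcept>
    #include <string>
    #include <unordered_map>

    // Toy stand-in for the real cache; keys and values are opaque strings here.
    using Cache = std::unordered_map<std::string, std::string>;

    // On deserialization failure, evict the stale entry and fall back to the
    // source of truth, so old-format blobs can never wedge the new instances.
    std::string get_or_recompute(Cache& cache, const std::string& key,
                                 const std::function<std::string(const std::string&)>& decode,
                                 const std::function<std::string()>& recompute) {
        if (auto it = cache.find(key); it != cache.end()) {
            try {
                return decode(it->second);   // may be an old-format blob
            } catch (const std::runtime_error&) {
                cache.erase(it);             // purge the entry that cannot be read...
            }
        }
        std::string value = recompute();     // ...and recompute it
        cache[key] = value;
        return value;
    }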


> Are you saying that you can write tests at the same speed as AI can?

I feel this is a gross mischaracterization of any workflow that involves using LLMs to generate code.

The hard part of generating code with LLMs is not how fast the code is generated. The hard part is verifying it actually does what it is expected to do. Unit tests too.

LLMs excel at spewing out test cases, but you need to review each and every single test case to verify that it checks anything meaningful or valid, and you need to iterate on the tests to see whether they are even green and what the code coverage actually is. That is the part that consumes the time.

Claiming that LLMs are faster at generating code than you is like claiming that copy-and-pasting code out of Stack Overflow is faster than you writing it. Perhaps, but how can you tell if the code actually works?


> This means that it also frequently shows you videos outside of your "bubble" as a test to see if you're also interested in other topics.

Namely far-right, xenophobic content, mixed with subversive propaganda pushed by state actors.

https://en.wikipedia.org/wiki/Alt-right_pipeline


I have not seen any of this, and I am well aware that it exists, but I imagine that will change once Oracle takes over.


> A framework is some kind of application scaffolding that normally calls you.

This. A framework relies extensively on inversion of control. It provides the overall software architecture of an application, and developers just plug in the components that the framework calls in order to customize specific aspects.
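
A toy sketch of that inversion of control (the App type and hook names are made up purely for illustration): your code registers a component, and the framework owns the loop that calls it.

    #include <functional>
    #include <initializer_list>
    #include <iostream>
    #include <string>
    #include <utility>

    // Toy "framework": it owns the main loop and calls *your* code, not the
    // other way around.
    class App {
    public:
        void on_request(std::function<std::string(const std::string&)> handler) {
            handler_ = std::move(handler);
        }
        void run() {  // the framework drives the control flow
            for (const auto& req : {"ping", "hello"}) {
                std::cout << handler_(req) << '\n';  // your component gets called back
            }
        }
    private:
        std::function<std::string(const std::string&)> handler_;
    };

    int main() {
        App app;
        app.on_request([](const std::string& req) { return "handled: " + req; });
        app.run();  // hand control to the framework; it calls you
    }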


> Not OP, but the case can be made that it's still the same very ugly language of 10 years ago, with few layers of sugar coating on top.

Let's talk specifics. Since you seem to have strong opinions: in your view, what is the single worst aspect of JavaScript that justifies the use of the word "ugly"?


https://dorey.github.io/JavaScript-Equality-Table/

https://www.reddit.com/r/learnjavascript/comments/qdmzio/dif...

or anything that touches array ops (concatenating, map, etc…). I mean, better and more knowledgeable people than me have written thousands of articles about those footguns and many more.

I am not a webdev and I don't want to remember those things, but more often than I would wish I have to interop with JS, and then I'd rather use a better-behaved language that compiles down to JS (there are many very good ones nowadays) than deal with JS directly and pray for the best.


Both of the things you linked are basically non-issues in practice: you just always use const/let and always use triple-equals for equality comparisons, and that's that. Most people who write JavaScript regularly will lint these out in the first place.

OTOH I think JS has great ergonomics, especially wrt closures, which a number of popular languages get wrong: arrow functions provide a syntactically pleasant way to write lambdas, let/const have per-iteration binding in loops to avoid nasty surprises when capturing variables, and a good number of standard methods exploit them (eg map/filter on arrays). I also think, though a lot of people would disagree because of function coloring, that built-in async is a great boon for a scripting language: you can do long operations like IO without having to worry about threading or locking up a thread, so you get to work with a single-threaded mental model with a good few sharp edges removed.


If implicit type conversion and the newer variable declaration keywords are your top complaints about a language, I'm sorry to say that you are at best grasping at straws to find some semblance of justification for your irrational dislike.

> I am not a webdev, I don't want to remember those things, (...)

Not only is JavaScript way more than a webdev thing, but you are also ignoring the fact that most mainstream programming languages support things like automatic type conversion too.


> you are at best grasping at straws to find some semblance of justification for you irrational dislike.

You seem so emotionally involved that the whole point whooshed right over your head. JS is a language that gives me no joy to use (there are many of those; I can put Fortran or SQL in there), and, remarkably, gives me no confidence that whatever I write with it does what I intend (down to basic branching that checks for nullness/undefinedness, checking for edge cases, etc.). In that sense it's much worse than most of the languages that I merely dislike.

> Not only is JavaScript way more than a webdev thing, you are ignoring the fact that most of the mainstream programming languages also support things like automatic type conversion.

Again, you are missing the point. JS simply has no alternative for webdev, but it's easy to argue that, for everything else, there are better, faster, more expressive, more robust, … languages out there. Consequently, the only time I ever have to touch JS is for webdev.


> Since hundreds of people were involved the most likely explanation is incompetence

Hundreds of people might have been involved, but the only key factor required for a single point of failure to propagate to the deliverable is a lack of verification.

And God knows the Trump administration is packed with inexperienced incompetents assigned to positions where they are way over their heads, and who routinely commit the most basic mistakes.


> Tyson is closing this meat packing facility because cattle herd sizes have shrunk. Why have they shrunk? Climate change.

That is not what your source states.

Here's a direct quote from the source:

> Many factors including drought and cattle prices have contributed to that decline. And now the emergence of a pesky parasite in Mexico and the prospect of widespread tariffs may further reduce supply and raise prices.

Source: https://apnews.com/article/beef-prices-record-high-cattle-st...

The word "climate" is not mentioned in either articles.

What the sources say is that beef prices are soaring, and ranchers have an incentive to sell off their cattle now to capture those soaring profits instead of holding onto them for breeding.

Also from the article:

> Nelson said that recently the drought has eased — allowing pasture conditions to improve — and grain prices are down thanks to the drop in export demand for corn because of the tariffs. Those factors, combined with the high cattle prices might persuade more ranchers to keep their cows and breed them to expand the size of their herds.


> ranchers have an incentive to sell off their cattle now to capture those soaring profits

This part seems like the main part of the article. They're looking at huge sales returns and relatively low feed costs that just dropped a bunch (since 2022-2023 highs) and want to rake in the money.

Pretty much that economist quote: "Do I sell that animal now and take this record high check?" "Do I keep her to realize her returns over her productive life when she's having calves?" "So far the side that's been winning is to sell her and get the check."

Personally, I wonder whether they're actually going to bother with a bigger herd. The low feed prices possibly incentivize selling off costly cows while feed is cheap and raking in as much as they can before prices go down.

And it all seems nonsensical, since cattle herds have been shrinking pretty much continuously since 2000 [1] while prices have been going up almost continuously except for 2015-2020 [2] (which is notably when herd sizes actually went UP). So when prices go down, herd sizes go up. People don't try to have more cows to make more money, they just sell the cows now.

[1] https://www.nass.usda.gov/Newsroom/2025/07-25-2025.php

[2] https://finance.yahoo.com/quote/LE%3DF/

Admittedly, the feed angle generally seems kind of BS anyway, drought or otherwise. The drought was probably an issue (prices went up during 2022-2023), yet hay is still pretty much on the same linear trend it's been on since 1970 [3]. Alfalfa hay also went up locally in 2022-2023, yet dropped back to the normal trend relatively quickly afterwards (basically the same linear rise since 1970) [4]. It matters, it just doesn't seem to matter that much.

[3] https://fred.stlouisfed.org/series/WPU0181

[4] https://fred.stlouisfed.org/series/WPU01810101

Cattle sale prices keep rising and farmers keep selling, rather than betting on long-term elevated prices and larger future herd sales.

EDIT: Thinking about it afterwards, it all seems like nonsense, because if this were gambling, then the usual response would be to double down, add more money, or similar while a player was raking in money at the table. If the table turned against them and they got poor returns, then usually they'd walk away or find a different table. Instead, they're "winning" and getting huge sale prices, so they ... have fewer cows?

