The entire point of the article is that you cannot throw from a destructor. Now how do you signal that closing/writing the file in the destructor failed?


You are allowed to throw from a destructor as long as there's not already an active exception unwinding the stack. In my experience this is a total non-issue for any real-world scenario. Propagating errors from the happy path matters more than situations where you're already dealing with a live exception.
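
As a sketch of that idea (the class name and the simulated failure here are made up), a destructor can check std::uncaught_exceptions() and only throw when no unwinding is already in progress:

    #include <exception>
    #include <stdexcept>

    // Hypothetical RAII wrapper: throws from its destructor only when it's safe,
    // i.e. when the destructor isn't running because another exception is unwinding.
    class CommitOnClose {
        int uncaught_on_entry_ = std::uncaught_exceptions();  // captured at construction
    public:
        ~CommitOnClose() noexcept(false) {
            bool commit_failed = true;  // pretend the final write/close failed
            if (commit_failed && std::uncaught_exceptions() == uncaught_on_entry_) {
                // No other exception is in flight, so propagating one is allowed.
                throw std::runtime_error("failed to commit on close");
            }
            // Otherwise we're already unwinding: log or swallow instead of throwing.
        }
    };

    int main() {
        try {
            CommitOnClose guard;
        } catch (const std::exception&) {
            // The error from the destructor arrives here on the happy path.
        }
    }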

For example: you can't write to a file because of an I/O error, and when throwing that exception you find that you can't close the file either. What are you going to do about that other than possibly log the issue in the destructor? Wait and try again until it can be closed?

If you really must force Java semantics into it with chains of exception causes (as if anybody handled those gracefully, ever) then you can. Get the current exception and store a reference to the new one inside the first one. But I would much rather use exceptions as little as possible.
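
For what it's worth, the standard library already has machinery for this kind of chaining; a minimal sketch (not tied to the destructor case) using std::throw_with_nested and std::rethrow_if_nested, which nest the in-flight exception inside the new one:

    #include <exception>
    #include <iostream>
    #include <stdexcept>
    #include <string>

    // Sketch: a low-level failure gets wrapped with higher-level context,
    // producing a Java-style cause chain.
    void write_file() {
        try {
            throw std::runtime_error("I/O error while writing");   // the original failure
        } catch (...) {
            // Throws a new exception with the current one stored inside it.
            std::throw_with_nested(std::runtime_error("could not save document"));
        }
    }

    void print_chain(const std::exception& e, int depth = 0) {
        std::cerr << std::string(depth * 2, ' ') << e.what() << '\n';
        try {
            std::rethrow_if_nested(e);          // rethrows the stored "cause", if any
        } catch (const std::exception& nested) {
            print_chain(nested, depth + 1);
        }
    }

    int main() {
        try {
            write_file();
        } catch (const std::exception& e) {
            print_chain(e);
        }
    }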


Just panic. What's the caller realistically going to do with that information?


> The entire point of the article is that you cannot throw from a destructor.

You need to read the article again, because your assertion is patently false. You can throw and handle exceptions in destructors. What you cannot do is let those exceptions escape the destructor, because per the standard an exception that leaves a destructor calls std::terminate and the application is terminated immediately.
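
A minimal sketch of that distinction, with a made-up class name: the throw inside the destructor is fine as long as it's caught before the destructor returns; letting it escape would call std::terminate, since destructors are noexcept by default in C++11 and later:

    #include <cstdio>
    #include <stdexcept>

    struct Flusher {
        ~Flusher() {
            try {
                throw std::runtime_error("flush failed");   // throwing inside is fine...
            } catch (const std::exception& e) {
                std::fprintf(stderr, "%s\n", e.what());     // ...as long as it doesn't escape
            }
            // If the exception left this destructor, std::terminate() would be called,
            // because the implicit exception specification here is noexcept(true).
        }
    };

    int main() {
        Flusher f;
    }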


You can throw in a destructor but not from one, as the quoted text rightly notes.


So inside a destructor throw has a radically different behaviour that makes it useless for communicating non-fatal errors


> So inside a destructor throw has a radically different behaviour that makes it useless for communicating non-fatal errors

It's weird how you tried to frame a core design feature of the most successful programming language in the history of mankind as "useless".

Perhaps the explanation lies in how you tried to claim that exceptions had any place in "communicating non-fatal errors", not to mention that your scenario, handling non-fatal errors when destroying a resource, is fundamentally meaningless.

Perhaps you should take a step back and think whether it makes sense to extrapolate your mental models to languages you're not familiar with.


That tastes like leftover casserole instead of pizza.


Maybe we can even find some correlation in the bit pattern of the input and the Boolean table!


Perhaps, but I fear you’re veering way too much into “clever” territory. Remember, this code has to be understandable to the junior members of the team! If you’re not careful you’ll end up with arcane operators, strange magic numbers, and a general unreadable mess.


The pixel position has to be known, how else are you rasterizing something?


The view transform doesn't necessarily have to be known to the fragment shader, though. That's usually the realm of the geometry shader, but even the geometry shader doesn't have to know how things correspond to screen coordinates, for example if your API of choice represents coordinates as floats in [-0.5, 0.5) and all you feed it is vertex positions (I ran into that with wgpu-rs). You can rasterize things perfectly fine with just vertex positions; in fact, you can even hardcode vertex positions into the geometry shader and not have to input any coordinates at all.


Rasterizing and shading are two separate stages. You don’t need to know pixel position when shading. You can wire up the pixel coordinates, if you want, and they are often nearby, but it’s not necessary. This gets even more clear when you do deferred shading - storing what you need in a G-buffer, and running the shaders later, long after all rasterization is complete.


Technically, the (pixel) fragment shader stage happens after the rasterization stage.


Their current OpenGL 4.1 implementation actually does run on top of Metal, making it even more blatantly obvious that they just don't want to.


I'm not sure exactly what you mean, but you can either output line primitives directly from the mesh shader or output mitered/capped extruded lines via triangles.

As for other platforms, there's VK_EXT_line_rasterization, which is a port of the OpenGL line-drawing functionality to Vulkan.


That said, MSVC, GCC, and Clang all implement it to allocate an exact value.


Can you point me to some good middleware then? I haven't been able to find any.


GDC Vault programming track has plenty of examples.


Any kind of relative/offset pointer requires negative pointer arithmetic. https://www.gingerbill.org/article/2020/05/17/relative-point...
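
For illustration, a minimal sketch along the lines of the linked article, with hypothetical names, storing a signed 32-bit offset relative to the address of the pointer object itself:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical relative pointer: stores a signed offset from its own address
    // instead of an absolute address, so the containing data stays relocatable.
    template <typename T>
    struct RelativePtr {
        std::int32_t offset = 0;  // signed: the target may live before or after this field

        void set(T* target) {
            offset = static_cast<std::int32_t>(
                reinterpret_cast<std::intptr_t>(target) -
                reinterpret_cast<std::intptr_t>(this));
        }

        T* get() const {
            // Negative offsets walk backwards in memory, hence the signed arithmetic.
            return reinterpret_cast<T*>(
                reinterpret_cast<std::intptr_t>(this) + offset);
        }
    };

    int main() {
        struct Node { int value; RelativePtr<int> prev; };
        int earlier = 42;
        Node node{0, {}};
        node.prev.set(&earlier);               // works whether &earlier is below or above &node
        std::printf("%d\n", *node.prev.get()); // prints 42
    }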


I don't think you can make such a broad statement and be correct in all cases. Negative pointer arithmetic is not by itself a reason to use signed types, except if you are:

1. Certain your added value is negative.

2. Checking for underflows after computation, which you shouldn't.

The article was interesting.


You can still statically link all your own code but dynamically link libc/other system dependencies.


Not with Rust…


I wonder what happens in the minds of people who just flatly contradict reality. Are they expecting others to go "OK, I guess you must be correct and the universe is wrong"? Are they just trying to devalue the entire concept of truth?

[In case anybody is confused by your utterance, yes of course this works in Rust]


Can you run ldd on any binary you currently have on your machine that is written in rust?

I eagerly await the results!


I mean, sure, but what's your point?

Here's nu, a shell in Rust:

    $ ldd ~/.cargo/bin/nu
        linux-vdso.so.1 (0x00007f473ba46000)
        libssl.so.3 => /lib/x86_64-linux-gnu/libssl.so.3 (0x00007f47398f2000)
        libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007f4739200000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f473b9cd000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4739110000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4738f1a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f473ba48000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f473b9ab000)
        libzstd.so.1 => /lib/x86_64-linux-gnu/libzstd.so.1 (0x00007f4738e50000)
And here's the Debian variant of ash, a shell in C:

    $ ldd /bin/sh     
        linux-vdso.so.1 (0x00007f88ae6b0000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f88ae44b000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f88ae6b2000)


Well, it seems I was wrong about linking C libraries from Rust.

The problems of increased RAM requirements and constant rebuilds are still very real, even if slightly smaller because the C libraries are dynamically linked.


That would have been a good post if you'd stopped at the first paragraph.

Your second paragraph is either a meaningless observation on the difference between static and dynamic linking or also incorrect. Not sure what your intent was.


Why do facts offend you?


I’m genuinely curious now, what made you so convinced that it would be completely statically linked?


I think people often talk about Rust only supporting static linking, so he probably inferred that it couldn't dynamically link with anything.

Also, Go does produce fully static binaries on Linux, so it's at least reasonable (if incorrect) to guess that Rust does the same.

Definitely shouldn't be so confident, though!


Go may or may not do that on Linux, depending on what you import. If you call things from `os/user`, for example, you'll get a dynamically linked binary unless you build with `-tags osusergo`. A similar case exists for `net`.


Go by default links libc.


It doesn't. See the sibling comment.


std::bind is bad for him for the same reasons std::function is, though.


Why? If the bound (member) function crashes, you should get a perfectly usable crash report. AFAIU his problem was that lambdas are anonymous function objects. That's not the case here, because the actual code resides in a regular (member) function.
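
A minimal sketch of what's meant, using a hypothetical Widget class: the body that could crash lives in a named member function, so a backtrace points at Widget::on_click rather than at an anonymous lambda type:

    #include <functional>
    #include <iostream>

    struct Widget {
        int value = 0;
        void on_click(int delta) {   // named member function: shows up by name in a backtrace
            value += delta;
            std::cout << "value = " << value << '\n';
        }
    };

    int main() {
        Widget w;
        // std::bind produces a callable whose invocation ends up inside Widget::on_click.
        std::function<void(int)> handler =
            std::bind(&Widget::on_click, &w, std::placeholders::_1);
        handler(5);   // if this crashed, the crashing frame would be Widget::on_click
    }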


Does a stack trace from a crash in a bound function show the line number of where the bind() took place?


Assuming the stack trace is generated by walking up the stack at the time the crash happened, nothing that works like a C function pointer would ever do that. Assigning a pointer to a memory location doesn't generate a stack frame, so there's no residue left on the stack that could be walked back.

A simple example: if you were to bind a function pointer in one stack frame and then immediately return it to the parent stack frame, which then invokes that bound pointer, the frame that bound the now-called function would literally not exist anymore.
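
A rough illustration of that example in code, with made-up names: by the time the bound callable runs (and could crash), make_handler's frame is already gone, so no backtrace can point at the bind site:

    #include <functional>
    #include <iostream>

    static void do_work(int x) {
        std::cout << "working on " << x << '\n';  // a crash here traces to do_work, not the bind site
    }

    // The frame of make_handler is gone as soon as it returns;
    // only the returned callable survives.
    static std::function<void()> make_handler() {
        return std::bind(do_work, 42);
    }

    int main() {
        auto handler = make_handler();  // the bind site's frame no longer exists past this point
        handler();                      // invocation happens in main's frame
    }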


No, but neither does the author's solution.

