flakes's comments (Hacker News)

I find it better to bubblewrap against a full sandbox directory. Using docker, you can export an image to a single tarball archive, flattening all layers. I use a compatible base image for my kernel/distro, and unpack the image archive into a directory.

With the unpack directory, you can now limit the host paths you expose, avoiding leaking in details from your host machine into the sandbox.

bwrap --ro-bind image/ / --bind src/ /src ...

Any tools you need in the container are installed in the image you unpack.

Some more tips: use --unshare-all if you can. Make sure to add the --proc /proc and --dev /dev options for a functional container. If you just need network access, use --unshare-all and --share-net together, keeping everything else separate. Make sure to drop any privileges with --cap-drop ALL.
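Putting the tips together, a sketch of the whole workflow (the image name, directory names, and trailing shell are illustrative; pick a base image that roughly matches your host kernel/distro):

```shell
# Flatten an image into a plain directory tree.
docker create --name sandbox-src ubuntu:24.04
mkdir -p image
docker export sandbox-src | tar -x -C image/
docker rm sandbox-src

# Run inside it, exposing only the paths you choose.
bwrap \
  --unshare-all --share-net \
  --ro-bind image/ / \
  --bind src/ /src \
  --proc /proc --dev /dev \
  --cap-drop ALL \
  /bin/bash
```

Note that `docker export` works on a container (hence the `create`/`rm` dance), not directly on an image, and the result has no layers left to leak metadata between builds.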


Maybe it's a mix of me using the site less, or the questions I previously answered not being as relevant anymore; however, as it stands, it's just not fun to visit the site anymore.

I have ~750 answers and 24K rep after almost 12 years of being a member. The site was a great way to spend some free cycles and help people. My favorite bounty answer led to me finding a bug in the Java compiler! I even got recruited into my current role from the old Stack Overflow Jobs board.

With AI, not only did the quality and frequency of posts go down, but the activity on my existing posts is basically zero now. I used to get a few notifications a week, with either comments on my past answers/questions or a few upvotes (for those fun little serotonin boosts). Looking at my past stats: in 2023 I had ~170 notifications, in 2024 that dropped to ~100, and in 2025 it went down to ~50 (with only 5 notifications since September).

I don't feel engaged with the community, and even finding new questions to answer is a struggle now with (the unanswerable) "open-ended questions" being mixed into the normal questions feed.


> Maybe once every four years on leap day or something.

Advantage: You no longer need to fix that leap day bug on your website.


Would it be better to start the intentional shutdown at, say, a couple of minutes before midnight, so you know the shutdown wasn't perhaps caused by the leap day bug?


I do the same, but I skip rsync for git.

    git clone $uri dotfiles; export HOME=$(pwd)/dotfiles 
These days, my laptop acts as a dumb SSH gateway for Linux VMs. No configuration or setup, aside from VS Code connecting to the VMs. Any server that I would want to load my dotfiles onto will almost always have git installed.

Rant (not directed at any comment here): If it's a production server without git, then please do not run scripts like this. Do not create junk directories on (or ideally any modifications to) secure machines. It inevitably causes new and uninteresting puzzles for your colleagues. Create documented workflows for incident responses or inspection.


auto has really made C++ unapproachable to me. It's hard enough to reason about anything templated, and now I frequently see code where every method returns auto. How is anyone supposed to do a code review without loading the patch into their IDE?


I'd suppose this really depends on how you are developing your codebase, but most code should probably be using a trailing return type, or an auto (or template) return type with a concept/requires constraint on the return type.

For any seriously templated or metaprogrammed code nowadays, a concept/requires clause is going to make it a lot more obvious what your code is actually doing, and give you useful errors in the event someone is misusing your code.


I don't understand why anyone would use auto and a trailing return type for their functions. The syntax is annoying and breaks too much precedent.


Generally, you don't. I'm not sure why the parent suggested you should normally do this. However, there are occasional specific situations in which it's helpful, and that's when you use it.


It just is an improvement for a bunch of reasons.

1. Consistency across the board (with the places where it's required: metaprogramming, lambdas, etc.). And as a nicety, it forces function/method names to be aligned, instead of having variable character counts for the return type before the names. IMHO it makes skimming code easier.

2. It's required for certain metaprogramming situations, and it makes other situations an order of magnitude nicer. Nowadays you can just say `auto foo()`, but if you constrain the type, either in the trailing return or in a requires clause, it makes reading the code a lot easier.

3. The big one for everyday users is that the trailing return type position gets extra name resolution in scope. For example, if the function is a member function/method, the class scope is automatically included, so you can just write `auto Foo::Bar() -> Baz {}` instead of `Foo::Baz Foo::Bar() {}`.
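A minimal sketch of point 3 (Foo and Baz are made-up names for illustration):

```cpp
#include <string>

struct Foo {
    using Baz = std::string;  // a nested type alias
    auto Bar() -> Baz;
};

// In the trailing position, "Baz" is looked up inside Foo's scope, so no
// qualification is needed. The classic spelling would have to repeat it:
//   Foo::Baz Foo::Bar() { ... }
auto Foo::Bar() -> Baz { return "baz"; }
```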


1. You're simply not going to achieve consistency across the board, because even if you dictate this by fiat, your dependencies won't be like this. The issue of the function name being hard to spot is easier to fix with tooling (just tell your editor to color them or make them bold or something). OTOH, it's not so nice to be unable to tell at a glance if the function return type is deduced or not, or what it even is in the first place.

2. It's incredibly rare for it to be required. It's not like 10% of the time; it's more like < 0.1% of the time. Just look at how many functions are in your code and how many of them actually can't be written without a trailing return type. You don't change habits to fit a tiny minority of your code.

3. This is probably the best reason to use it and the most subjective, but still not a particularly compelling argument for doing this everywhere, given how much it diverges from existing practice. And the downside is the scope also includes function parameters, which means people will refer to parameters in the return type much more than warranted, which is decidedly not always a good thing.


1) Consistency. 2) Scoping is different, and it can make a significant difference.

I have been programming in C++ for 25 years, so I'm so used to the original syntax that I don't default to auto ... ->, but I will definitely use it when it helps simplify some complex signatures.


Consistency (lambdas, etc.)


It makes function declarations/definitions much more grep-able.


Same in Java and Kotlin, but then again, the majority of people who write code wouldn't understand what I mean when I say "whether a collection is returned as List or Set is an important part of the API 'contract'".


The pain of lisp without any of the joy.


A selfie at that resolution would be some sort of super-resolution microscopy.


> unless I'm missing something I don't think a QUERY verb will change all that much here?

The semantics are important. GET APIs are expected to be safe, idempotent, and cache-friendly. When you are unable to use GET for technical reasons and move to POST, suddenly none of the infrastructure (routers, gateways, generic HTTP libraries) can make those assumptions about your API. For example, many tools will not attempt retry logic around POST calls, because they cannot be sure that retrying is safe.

Having the QUERY verb allows us to overcome the technical limitations of GET without having to drop the safety expectations.
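For illustration, a QUERY request looks like a GET with a request body carrying the query; this sketch is loosely modeled on the examples in the IETF httpbis "safe method with body" draft (the path, query language, and media type are hypothetical):

```
QUERY /contacts HTTP/1.1
Host: example.org
Content-Type: example/query
Accept: application/json

select name, email from contacts where email like "%example.com"
```

Because the method is defined as safe and idempotent, an intermediary can retry or cache the response where it would have to refuse with POST.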


I like the safety aspect of QUERY. Having CDNs cache based on the semantics of the content might be a hard ask. I wonder if this might lead to a standards-based query language being designed and a push for CDNs to support it. Otherwise you probably need to implement your own edge processing of the request and cache handling for any content type you care to handle.


Yes, I understand that. I'm just commenting on the "bookmarkable" aspect here, obviously.


You can just use body with GET. QUERY is redundant.


You can, and that is mentioned in RFC 9110... along with the cons for doing so.

> Although request message framing is independent of the method used, content received in a GET request has no generally defined semantics, cannot alter the meaning or target of the request, and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack (Section 11.2 of [HTTP/1.1]). A client SHOULD NOT generate content in a GET request unless it is made directly to an origin server that has previously indicated, in or out of band, that such a request has a purpose and will be adequately supported. An origin server SHOULD NOT rely on private agreements to receive content, since participants in HTTP communication are often unaware of intermediaries along the request chain.

QUERY is a new option to help avoid some of those downsides.

https://www.rfc-editor.org/rfc/rfc9110.html#section-9.3.1


Just release a change to this RFC and adjust the GET body semantics. Much easier than introducing a whole new verb, and it's already supported in a lot of software.


I had no idea there was a hotkey for that. Thank you!


Even better, doing so allows GitHub to insert a source snippet if you paste a link like that into an issue or comment.


Yeah, I just always used the context menu to copy a permalink. Saves me a step now!


My pleasure!


Another great post from Arse Technica


What does “4x the peak GPU compute performance” mean here? No latency difference, but higher throughput? The footnote was not at all helpful:

> Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.

