Neuroplasticity. Seems better than the damage caused to your lungs and cells from smoking.
I mean, do you have any evidence that the brain is irreversibly damaged by social media? I have not seen any, but I have seen evidence that there is permanent cell damage from smoking.
To play devil's advocate, there are good studies linking social media use to depression.
While you can somewhat mitigate the negative health effects of smoking by stopping and then making healthy decisions like doing sports and paying attention to what you eat, depression isn't something you can just stop having.
But are you saying that social media causes irreversible and permanent depression that neuroplasticity cannot ever reverse?
There is also a healthy side to social media, but not really a healthy side to smoking.
Social media helps me make friends and keep in touch with them. I have not found any negatives personally. My feeds are pretty much just posts from friends; I have removed everything else by now.
This whole conversation seems a bit simplistic and reductionist.
Sure, the brain grows and changes, but just pointing to 'neuroplasticity' (a concept none of us really understands) and saying 'it's all good' isn't that insightful, because it's too one-dimensional. At the end of the day we can say that social media must have some permanent effect on the brain, because people remember their time on it, right? Yes, it's a mixed bag with some positives, but there's still an opportunity cost for the time spent on social media: time shared with loved ones, the formation of positive relationships in the real world, and perhaps career opportunities.
With that said the bigger issue to keep in mind is that the people who push this kind of technology on society do so knowing that it has negative consequences for individual users and society as a whole and yet they push it anyways for personal profit. And more than just pushing it they actively lobby the government to change laws or prevent regulations from being enacted that would stop them from doing so.
This is odious behaviour and it should be stopped and the people involved should face personal consequences for damaging society so casually.
I think the parent commenter meant that what's insane is that a JS runtime is not treated as a utility which should never be monetized. It's as if the GCC developers simply hadn't figured out how to monetize it yet, but were willing to at some point.
GOG has a strong anti-DRM stance, but unfortunately not all of the games GOG sells are truly DRM-free once you consider things like online service requirements and live patching/live-service features. The worst offenders are often Sony-published games, some of which still bundle rootkit-grade anti-cheat in the GOG edition, with mandatory online "data collection" required for the game to run, even for single-player games.
GOG will still give you an offline capable installer file for that game, and hasn't entirely compromised its values on that aspect of DRM-free, but the game won't boot up offline and/or without agreeing to the data collection terms and installing the rootkit.
I like GOG and the criticisms here are only because I'd love to see GOG do better, but I also know GOG alone can't fight "the cloud" and even single player games from major publishers having "required" online services. It's a DRM of a different sort (and remains a long term archival issue, because few of the companies like Sony will ever unlock the game or open source the service at the end of the games' commercial lives and would seem to prefer to just leave those games unplayable).
Wine-wrapped installers for ... which distro? They ship a shell script that extracts the Linux game binaries to the user's home dir. Works on all Linuxes.
GOG ships what's available. If game devs never made any linux binaries, then there won't be any linux binaries. What? You expected GOG to make a linux port of the game?
Games with wine don't require any special installers. Just open the wine desktop and install the windows game from there, like any other windows program you use in Linux. If you think that's too hard, then get a PS/Xbox and see my original reply, the one with the "we're doomed".
BTW, you can set up your linux to directly execute Windows binaries using binfmt_misc, but that may also be too hard for some...
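For reference, the binfmt_misc setup mentioned above is a one-line registration (assuming Wine lives at /usr/bin/wine; the path varies by distro, and this needs root):

```shell
# Register Wine as the handler for Windows PE executables, matched by
# their leading "MZ" magic bytes. Requires root and a mounted
# binfmt_misc filesystem (usually automatic on modern distros).
echo ':DOSWin:M::MZ::/usr/bin/wine:' | sudo tee /proc/sys/fs/binfmt_misc/register
```

After that, `./game.exe` runs through Wine directly, as if it were a native binary.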
I don't see why that should matter. It's games, you'd practically have to ship your own libraries anyway.
>If game devs never made any linux binaries, then there won't be any linux binaries. What? You expected GOG to make a linux port of the game?
Personally I couldn't give less of a shit, I'm an adult and have better things to do than play videogames.
I certainly do think it's not an unreasonable wish, and it wouldn't even be particularly hard. If GOG wanted to, they could provide pre-configured wine-wrapped installers for games that just work.
I do not know whether or not this would make financial sense for them, but Valve seems to think so, and I suspect GOG could do with a few cheap European software engineers wrapping games for them. Hell, they could even cut costs further by just open-sourcing their wrappers and largely relying on user-submitted patches for maintenance.
>Games with wine don't require any special installers. Just open the wine desktop and install the windows game from there, like any other windows program you use in Linux.
If you'd ever used Wine you'd know how fiddly it is; there'd obviously be a lot of value in having someone else handle that fiddling for you.
> If you think that's too hard, then get a PS/Xbox and see my original reply, the one with the "we're doomed".
I don't know if GOG shares your poor attitude, but that certainly wouldn't be a good way to run a business. Try coming out of the basement every now and then.
The question for grown-ups with things to do in their lives is usually not whether or not something is too hard, but whether or not it is worth spending their time on. If I ever wanted to play a game, looking up some workaround for a wine-related crash is the last thing I'd want to spend my time on.
I think "UK citizen" should have been replaced by "person acting from within the UK". That's how it's defined in the context of the GDPR: nationality doesn't matter; what matters is where you are when you're provided services.
Let me play devil's advocate: for some reason, functions such as strcpy in glibc have multiple runtime implementations and are selected by the dynamic linker at load time.
And there's a performance cost to that. If there was only one implementation of strcpy and it was the version that happens to be picked on my particular computer, and that implementation was in a header so that it could be inlined by my compiler, my programs would execute faster. The downside would be that my compiled program would only work on CPUs with the relevant instructions.
You could also have only one implementation of strcpy and use no exotic instructions. That would also be faster for small inputs, for the same reasons.
Having multiple implementations of strcpy selected at runtime optimizes for a combination of binary portability between different CPUs and for performance on long input, at the cost of performance for short inputs. Maybe this makes sense for strcpy, but it doesn't make sense for all functions.
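To make the mechanism concrete, here's a minimal sketch of that load-time selection, using constructor-based dispatch in the spirit of glibc's ifunc resolvers. All names here are hypothetical, and the "vectorized" variant is just a stand-in, not glibc's actual implementation:

```c
/* Sketch of glibc-style runtime dispatch: pick one of several
 * implementations once, before main(), based on CPU features.
 * Names (strcpy_generic, strcpy_impl, ...) are made up for
 * illustration. */
#include <string.h>

static char *strcpy_generic(char *dst, const char *src) {
    char *d = dst;
    while ((*d++ = *src++) != '\0')
        ;
    return dst;
}

/* stand-in for a vectorized variant; here it just defers to memcpy */
static char *strcpy_vector(char *dst, const char *src) {
    return memcpy(dst, src, strlen(src) + 1);
}

/* every call site goes through this pointer, chosen at load time */
char *(*strcpy_impl)(char *, const char *) = strcpy_generic;

__attribute__((constructor))
static void pick_strcpy(void) {
#if defined(__x86_64__) || defined(__i386__)
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx2"))
        strcpy_impl = strcpy_vector;
#endif
}
```

The indirection through `strcpy_impl` is exactly the cost being discussed: the compiler can never inline through it, because it doesn't know which body will be chosen.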
You can't really state this with any degree of certainty when talking about whole-program optimization and function inlining. Even with LTO today you're talking 2-3% overall improvement in execution time, without getting into the tradeoffs.
Typically, making it possible for the compiler to decide whether or not to inline a function is going to make code faster compared to disallowing inlining. Especially for functions like strcpy which have a fairly small function body and therefore may be good inlining targets. You're right that there could be cases where the inliner gets it wrong. Or even cases where the inliner got it right but inlining ended up shifting around some other parts of the executable which happened to cause a slow-down. But inliners are good enough that, in aggregate, they will increase performance rather than hurt it.
> Even with LTO today you're talking 2-3% overall improvement in execution time
Is this comparing inlining vs no inlining or LTO vs no LTO?
In any case, I didn't mean to imply that the difference is large. We're literally talking about a couple clock cycles at most per call to strcpy.
What I was trying to point out is that you're essentially talking about LTO. Getting into the weeds, the compiler _can't_ optimize strcpy(*) in practice because it's not going to be defined in a header-only library; it's going to be in a different translation unit that gets either dynamically or statically linked. The only way to optimize the function call is with LTO, and in practice, LTO only accounts for a 2-3% performance improvement.
And at runtime, there is no meaningful difference between strcpy being linked at runtime or ahead of time. libc symbols get loaded first by the loader and after relocation the instruction sequence is identical to the statically linked binary. There is a tiny difference in startup time but it's negligible.
Essentially the C compilation and linkage model makes it impossible for functions like strcpy to be optimized beyond the point of a function call. The compiler often has exceptions for hot stdlib functions (like memcpy, strcpy, and friends) where it will emit an optimized sequence for the target, but this is the exception that proves the rule. In practice, statically linking in dependencies (like you're talking about) does not yield a meaningful performance benefit in my experience.
(*) strcpy is weird; like many libc functions it's accessible via __builtin_strcpy in gcc, which may (but probably won't) emit a different sequence of instructions than the call to libc. I say "probably" because there are semantics undefined by the C standard that the compiler cannot reason about but the linker must support, like preloads and injection. In those cases symbols cannot be inlined, because doing so would break the ability to inject a replacement for the symbol at runtime.
> What I was trying to point out is that you're essentially talking about LTO. Getting into the weeds, the compiler _can't_ optimize strcpy(*) in practice because its not going to be defined in a header-only library, it's going to be in a different translation unit that gets either dynamically or statically linked.
Repeating the part of my post that you took issue with:
> If there was only one implementation of strcpy and it was the version that happens to be picked on my particular computer, and that implementation was in a header so that it could be inlined by my compiler, my programs would execute faster.
So no, I'm not talking about LTO. I'm talking about a hypothetical alternate reality where strcpy is in a glibc header so that the compiler can inline it.
There are reasons why strcpy can't be in a header, and the primary technical one is that glibc wants the linker to pick between many different implementations of strcpy based on processor capabilities. I'm discussing the loss of inlining as a cost of having many different implementations picked at dynamic link time.
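For illustration, the hypothetical "strcpy in a header" would just be a static inline definition visible to every translation unit, so the compiler can inline and specialize it per call site (the name `inline_strcpy` is made up):

```c
/* Hypothetical header-style strcpy: because the body is visible at
 * the call site, the compiler is free to inline it, unroll the loop
 * for a known-constant source, drop the return value if unused, etc.
 * The cost is exactly what the thread describes: the binary is now
 * committed to this one implementation. */
static inline char *inline_strcpy(char *dst, const char *src) {
    char *d = dst;
    while ((*d++ = *src++) != '\0')
        ;
    return dst;
}
```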
The Linux kernel has an interesting optimization using the ALTERNATIVE macro, where you can directly specify one of two instruction sequences and the right one is patched in at boot time depending on CPU flags. No function calls needed (although one of the alternatives can itself be a function call). It's a bit messier in userspace, where you have to respect page protection flags, etc., but it should be possible.
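A rough userspace analogue of that boot-time patching, assuming Linux on x86-64 (other platforms just report "unsupported"): flip the code page writable, overwrite the instruction bytes, flip it back. This is illustrative only; the kernel's ALTERNATIVE machinery is far more careful, and hardened systems may refuse the writable+executable mapping:

```c
/* Userspace sketch of runtime instruction patching, loosely analogous
 * to the kernel's ALTERNATIVE mechanism. Function names are made up
 * for this example. */
#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* The function we patch. noinline gives it a real body at a real
 * address; callers should go through a volatile function pointer so
 * the compiler can't constant-fold the original return value. */
__attribute__((noinline)) int patched(void) { return 1; }

/* Rewrite patched()'s body to "mov eax, 2; ret".
 * Returns 0 on success, -1 if unsupported or if mprotect fails. */
int apply_patch(void) {
#if defined(__x86_64__)
    long pg = sysconf(_SC_PAGESIZE);
    void *page = (void *)((uintptr_t)patched & ~(uintptr_t)(pg - 1));

    /* the "messy part" in userspace: code pages are r-x by default */
    if (mprotect(page, pg, PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
        return -1;

    static const unsigned char insn[] =
        { 0xb8, 0x02, 0x00, 0x00, 0x00, 0xc3 };  /* mov eax,2 ; ret */
    memcpy((void *)patched, insn, sizeof insn);

    mprotect(page, pg, PROT_READ | PROT_EXEC);
    return 0;
#else
    return -1;
#endif
}
```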