I think that the fact that no one is fully sure is part of the problem.
The act is intentionally very vague and broad.
Generally, the gist is that it's up to the platforms themselves to assess and identify risks of "harm", implement safety measures, keep records and run audits. The guidance on what that means is very loose, but some examples might include stringent age verification, proactive and effective moderation, and thorough assessment of all algorithms.
If you were ever investigated, it would be up to someone to decide whether your measures were adequate or whether you fell short.
This means you might need to spend significant time making sure that your platform can't allow "harm" to happen, and maybe you'll need to spend money on lawyers to review your "audits".
The repercussions of being found wanting can be harsh, so one has to ask whether it's still worth risking it all to run that online community.
The full agenda of course is: if we jail someone for the meme, then we get to force the company to remove the meme, and then we get to destroy the company if they do not comply with exacting specifications within exact times. Thus full control of speech, teehee modern technology brings modern loopholes! "shut up peon, you still have full right to go into your front yard and say your meme to the squirrels"
No, not just memes that encourage people to riot in the streets. But you know that, and it's funny that you think a meme encouraging that should be enough to land someone in jail anyway. It fits very well with the servile "king's subject" mentality of your average Brit, but still, it's always funny to come across.
(And no, I'm not American, but the UK is at the opposite extreme, rationalizing every single restriction and siding with the authorities every single time. It's extremely pitiful to see.)
> If you were ever investigated, it would be up to someone to decide whether your measures were adequate or whether you fell short.
This is the problem with many European (and I guess also UK) laws.
GDPR is one notable example. Very few people actually comply with it properly. Hidden "disagree" options in cookie pop-ups and unauthorized data transfers to the US are almost everywhere, not to mention the "see personalized ads or pay" business model.
Unlike with most American laws, GDPR investigations happen through a regulator, not a privately-initiated discovery process where the suing party has an incentive to dig up as much dirt as possible, so in effect, you only get punished if you either really go overboard or are a company that the EU dislikes (which is honestly mostly just Meta at this point).
NOYB is a non-governmental organisation that initiated many of the investigations against Meta. E.g. they recently filed a complaint against the social media app BeReal for not taking no for an answer and repeatedly asking for permission to collect data after you decline.
Exactly the complaint that everyone on here made about GDPR, saying the sky would fall in. If you read UK law like an American lawyer you will find it very scary.
But we don't have political prosecutors out to make a name for themselves, so it works OK for us.
I don't have a full answer, but what I've observed is that there is always someone nearby and "in control".
The basic flow, I think, works like this:
- you scan the passport
- you look at the camera
- your passport details & photo are sent to the control room with your live feed
- a guy looks at those details plus your immigration history and other info
- same guy decides to let you in or send you to a desk
This was particularly obvious in the old Gibraltar border where you could easily be the only person going through and so the agent had to do the routine just to let you pass.
Yeah this is pretty much my assumption. I would imagine that the person doing the judging may not be on site as it's probably cheaper and easier to have a bigger workforce in one office, flex the people in shifts, load balance across all ports with the gates, etc.
I do wonder how much is automated. For example, is the gate fully automated, calculating a risk score and referring you to a border agent if it's above a threshold? Is a person looking at the details in real time to decide that, or are they just doing facial recognition, or are they only involved when the gates fail?
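The speculated "risk score plus threshold" flow could be sketched roughly like this. To be clear, every signal, weight, and threshold here is invented for illustration; nothing about how real e-gates actually score travellers is known from the thread:

```python
# Toy sketch of a SPECULATED e-gate decision flow: combine a few
# invented signals into a risk score, and refer to a human officer
# when the score crosses a threshold. Purely illustrative.

def risk_score(face_match: float, watchlist_hit: bool, prior_refusals: int) -> float:
    """Combine invented signals into a single score, capped at 1.0."""
    score = 1.0 - face_match          # a poor face match raises risk
    if watchlist_hit:                 # hypothetical watchlist signal
        score += 0.5
    score += 0.1 * prior_refusals     # hypothetical history signal
    return min(score, 1.0)

REFER_THRESHOLD = 0.3  # invented cutoff

def gate_decision(face_match: float, watchlist_hit: bool, prior_refusals: int) -> str:
    """Open the gate automatically, or refer the traveller to an agent."""
    if risk_score(face_match, watchlist_hit, prior_refusals) >= REFER_THRESHOLD:
        return "refer_to_agent"
    return "open_gate"

print(gate_decision(0.98, False, 0))  # strong match, clean history -> open_gate
print(gate_decision(0.85, True, 2))   # weak match + watchlist hit -> refer_to_agent
```

Even under this toy model, the interesting operational question from the thread remains: whether the "agent" step is a person watching in real time or only a fallback when the automated path refuses.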
My company (UK) recently tried to force on-call on all engineers.
The initial wording was very restrictive: a 5-minute acknowledgement time and 15 minutes to be at your laptop, 24/7 for 7 days. They tried to implement this without any extra remuneration or perks for the on-call engineer.
On top of possibly being very illegal, it seems very immoral to spring something like that on people who did not agree to it when they took the job.
I fought it and got them to change their policy in two meaningful ways:
- It's an opt-in method
- On-call engineers get paid extra for just being on-call and get extra time off whenever they need to actually do something.
This makes sure that you only get people actually willing to do it and there is an incentive. I think it's been quite a successful program!
Luckily I didn't need to get a union involved, but in the UK unions are starting to form for tech workers; I suggest you join one, like https://prospect.org.uk/tech-workers
A company I used to work for asked me to do on-call, it wasn't in my contract, I declined, that was that.
I don't understand what "force" means in this context. The conversation went something like "I have commitments outside of work" and that was that. I mean, there was a back and forth, but yeah, at the end of the day I took the job knowing the hours they'd want me available.
Indeed, which is why I think they ended up backing out. But even if they could force it, there are definitely better ways of handling it. The deal we ended up with benefits all sides, and I wish more companies would adopt it.
>In a call I was explicitly told "every company does it like this, if that's not ok you might not be a right fit for this company".
In situations like this it's helpful to have a no-management backchannel team chat group set up so you can use it synchronize a series of "nope, not doing that".
I joined Prospect because my company tried to implement an unspoken on-call arrangement, whereby they would try to call me on my mobile 24/7 and expect an immediate response. I asked what the additional remuneration would be for that, and they said there isn't any.
Now I'm a Prospect member, and my mobile is always on mute.
I used to work for an MSP. They billed 2-3x the normal rate for on-call to clients. We, however, were simply paid our hourly rate plus overtime. It created a perverse incentive to have as many on-call events as possible as it was very profitable for the company. They billed minimum time to clients, but we were told we could only bill for the exact minutes spent working.
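The incentive gap is easy to see with some toy numbers. All the rates and multipliers below are invented; the thread only says the MSP billed 2-3x the normal rate with a minimum billed block, while paying engineers their exact minutes at overtime rates:

```python
# Toy arithmetic (all numbers invented) for the perverse incentive
# described above: the MSP bills the client a 1-hour minimum at 2.5x
# its normal rate, but pays the engineer only the exact minutes
# worked, at 1.5x their base hourly pay.

NORMAL_RATE = 100.0        # client's normal hourly rate (invented)
ONCALL_MULTIPLIER = 2.5    # on-call markup billed to the client
ENGINEER_RATE = 50.0       # engineer's base hourly pay (invented)
OVERTIME_MULTIPLIER = 1.5  # overtime rate paid to the engineer

def event_margin(minutes_worked: int) -> float:
    """Company margin on a single on-call event."""
    billed_hours = max(1.0, minutes_worked / 60)   # 1-hour minimum billed
    revenue = billed_hours * NORMAL_RATE * ONCALL_MULTIPLIER
    paid_hours = minutes_worked / 60               # exact minutes paid
    cost = paid_hours * ENGINEER_RATE * OVERTIME_MULTIPLIER
    return revenue - cost

# A 10-minute page: bill 1 hour at 250, pay 12.50 -> 237.50 margin.
print(event_margin(10))
```

Short, frequent pages are the most profitable case under this model, which is exactly the incentive the comment describes.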
I am in the same position where I emigrated and would not fight for my home country. That said, there are many reasons why someone would.
First of all, not everyone emigrates because of hate for their homeland, many do it for necessity and keep close ties.
Some emigrate to seek their fortune, hoping one day to return.
Some emigrate for love, of people or things.
And even those that in the end emigrate because they didn't like their country might still want to fight, not for the country, but for the people that they love that still live in it.
Quite OT: how do people get AdSense (or other advertisers) on apps/games like these? I have a few apps that I would love to try to monetize, but whenever I apply for AdSense I get rejected for "lack of content".
Don't use adsense for web games. Take this advice from someone who had to deal with them. You're just one sniff away from Google's algorithm locking down your account and impacting your earnings.
Short answer: you don't; Google doesn't like them. If you have a huge number of players, you may be able to get in contact. TETR.IO uses AdinPlay, which does use the same Google network, but without being immediately thrown out of it.
That's a real shame... I'll consider raising the prices, but I'm really worried about it because it's just an impulse purchase for a very small thing, and I feel like almost no one will be interested if the price is too high.