So, for 10 pairs, 45 guesses (9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1) in the worst case, and roughly half that on average?
It's interesting how close 22.5 is to the 21.8 bits of entropy for 10!, and that has me wondering how often you would win if you followed this strategy with 18 truth booths followed by one match-up (to maintain the same total number of queries).
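For reference, the arithmetic behind both figures:

    9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 45, and 45 / 2 = 22.5
    log2(10!) = log2(3628800) ≈ 21.79 bits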
Simulation suggests about 24% chance of winning with that strategy, with 100k samples. (I simplified each run to "shuffle [0..n), find index of 0".)
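A minimal sketch of that simulation in C++ (my reading of the simplification: a pair with k remaining candidates costs a number of booths equal to the match's index in a shuffled order, and you win if the total fits in the 18-booth budget; names are mine):

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng{std::random_device{}()};
        const int kRuns = 100000;
        int wins = 0;
        for (int run = 0; run < kRuns; ++run) {
            int booths = 0;
            // Resolve pairs one at a time; confirming a pair removes both
            // people from the pool, so successive pairs have 10, 9, ..., 2
            // candidates (the final pair is known by elimination).
            for (int k = 10; k >= 2; --k) {
                std::vector<int> cand(k);
                std::iota(cand.begin(), cand.end(), 0);
                std::shuffle(cand.begin(), cand.end(), rng);
                // "shuffle [0..k), find index of 0": the index is the
                // number of wrong guesses before hitting the true match.
                booths += std::find(cand.begin(), cand.end(), 0)
                          - cand.begin();
            }
            if (booths <= 18) ++wins;  // 18 booths, then one match-up
        }
        std::cout << (100.0 * wins / kRuns) << "% wins\n";
    }

Under this model the mean cost is the 22.5 booths from above, so landing at or under 18 roughly a quarter of the time lines up with the ~24% figure.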
The optimal truth booth strategy should be easier to understand. Since it's a yes/no question, the maximum entropy is 1 bit, as you and others have noted. As such, you want to pick a pair where the odds are as close to 50/50 as possible.
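In symbols, the expected information from a yes/no question whose "yes" probability is p is the binary entropy

    H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)

which peaks at exactly 1 bit at p = 1/2, hence the 50/50 target.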
> Employing that approach alone performed worse than the contestants did in real life, so didn't think it was worth mentioning!
Yeah, this alone should not be sufficient. In the extreme case of scoring 0, you also need the constraint that you're not repeating known-bad pairs. The same applies to pairs ruled out (or in!) by truth booths.
Further, if your score goes down, you need to use that as a signal that one (or more) of the pairs you swapped out was actually correct, and you need to cycle those back in.
I don't know what a human approximation of the entropy-minimization approach looks like in full. Good luck!
«As such, you want to pick a pair where the odds are as close to 50/50 as possible.»
This is incorrect; the correct strategy is mostly to check the most probable match (the exception being when the people in that match have fewer possible pairings remaining than the next most probable match).
The value of confirming a match, and thus eliminating all other pairings involving those two from the search space, is much higher than a 50/50 chance of getting a "no" and excluding only that single pairing.
> This is incorrect; the correct strategy is mostly to check the most probable match (the exception being when the people in that match have fewer possible pairings remaining than the next most probable match).
Do you have any hard evidence, or are you just basing this on vibes? Because your proposed strategy is emphatically not how you maximize information gain.
Scaling up the problem to larger sizes, is it worth explicitly spending an action to confirm a match that has 99% probability? Is it worth it to (most likely) eliminate 1% of the space of outcomes (by probability)? Or would you rather halve your space?
This isn't purely hypothetical, either. The match-ups skew your probabilities such that your individual outcomes cease to be equally probable, so just looking at raw cardinalities is insufficient.
If a match-up gave you a single match out of 10 pairings, and you've ruled out 8 of them directly, then targeting one of the two remaining pairs nominally gives you a 50/50 chance of getting a match (or no match!).
Meanwhile, you could have another match-up where you scored 6 out of 10 pairings and have ruled out 2 of them (leaving 8 pairs to check, 6 of which are definitely matches). Do you spend your truth booth on the 50/50 shot (which, either way, will reveal where the match is), or on the 75/25 shot?
(I can construct examples where you have a 50/50 shot but without the guarantee on whether you reveal a match. Your information gain will still be the same.)
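For concreteness, plugging these scenarios into the binary entropy H(p) = -p \log_2 p - (1-p) \log_2(1-p) gives the expected information per truth booth:

    H(0.50) = 1.000 bits
    H(0.75) ≈ 0.811 bits
    H(0.99) ≈ 0.081 bits

So the 50/50 booth yields roughly 12x the expected information of confirming the 99% match.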
It's way more lopsided than your example would suggest.
My understanding is that Netflix can stream 100 Gbps from a 100W server footprint (slide 17 of [0]). Even if you assume every stream is 4k and uses 25 Mbps, that's still thousands of streams. I would guess that the bulk of the power consumption from streaming video is probably from the end-user devices -- a backbone router might consume a couple of kilowatts of power, but it's also moving terabits of traffic.
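The arithmetic, using the numbers above:

    100 Gbps / 25 Mbps = 4,000 concurrent 4k streams
    100 W / 4,000 streams = 25 mW of server power per stream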
> IETF consensus does not require that all participants agree although this is, of course, preferred. In general, the dominant view of the working group shall prevail. (However, it must be noted that "dominance" is not to be determined on the basis of volume or persistence, but rather a more general sense of agreement.) Consensus can be determined by a show of hands, humming, or any other means on which the WG agrees (by rough consensus, of course). Note that 51% of the working group does not qualify as "rough consensus" and 99% is better than rough. It is up to the Chair to determine if rough consensus has been reached.
The goal has never been 100%, but it is not enough to merely have a majority opinion.
And to add to that, the blurb you link notes explicitly that for IETF purposes, "rough consensus" is reached when the Chair determines it has been reached.
Yes, but WG chairs are supposed to help. One way to help would have been to do a consensus call on the underlying controversy. Still, I think the chair is in the clear as far as the rules go.
The combative stance that he's taking really doesn't do him any favors in resolving the issue.
Lawyer: "I've confirmed that at least one UK IP address is blocked."
Regulators: "We've confirmed that at least one UK IP address is not blocked."
In what world is the correct response "Dear regulators, you're incompetent. Pound sand." instead of "Can you share the IP address you used so my client can address this in their geoblock?"
> In what world is the correct response "Dear regulators, you're incompetent. Pound sand." instead of "Can you share the IP address you used so my client can address this in their geoblock?"
That would imply that the client actually would like to be contacted every time Ofcom found a leak in the geoblock. Not a good idea imho.
They don't agree that it is a public safety matter, or at least they've clearly taken the position that they don't care about that kind of public safety.
He's just pointing out that Ofcom's behavior is inconsistent with Ofcom itself sincerely believing it's a public safety matter.
I get that it's satisfying to tell them to go away because they're being unreasonable. But what's the legal strategy here? Piss off the regulators such that they really won't drop this case, and give them fodder to be able to paint the lawyer and his client as uncooperative?
Is the strategy really just "get new federal laws passed so UK can't shove these regulations down our throats"? Is that going to happen on a timeline that makes sense for this specific case?
He says on his site that he wants the US to pass a “shield law”; I guess the idea is to pass a law that explicitly says we don’t extradite for this, don’t pass along the fines, or whatever.
It seems like inside the US, this must be constitutionally protected speech anyway. I’m not 100% sure, but it would seem quite weird if the US could enter a treaty that requires us to enforce the laws of other countries in a way that is against our constitution. Of course the constitution doesn’t apply to the UK (something people just love to point out in these discussions), but it does apply to the US, which would be the one actually doing the enforcing, right?
Anyway, bumping something all the way up to the Supreme Court is a pain in the ass, so it may make sense to just pass a law to make it explicit.
The British legal system is pretty inefficient. I'd probably just say "sorry, we'll block harder." That alone could delay things for years, by which time there may be a different government, or a US shield law.
China is much more smartphone-centric than the US. QR codes are universal, and WeChat and AliPay are the most common forms of payment (online or in person).
Annoyingly, it depends on the type, sometimes with unintuitive consequences.
Move a unique_ptr? Guaranteed that the moved-from object is now null (fine). Move a std::optional? It remains engaged, but the wrapped object is moved-from (weird).
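A quick demo of both behaviors (C++17; the asserts reflect what the standard guarantees):

    #include <cassert>
    #include <memory>
    #include <optional>
    #include <string>

    int main() {
        // unique_ptr: the moved-from pointer is guaranteed to be null.
        auto p = std::make_unique<int>(42);
        auto q = std::move(p);
        assert(p == nullptr);

        // optional: moving does NOT disengage the source; only the
        // contained value is left in its moved-from state.
        std::optional<std::string> a = "hello";
        std::optional<std::string> b = std::move(a);
        assert(a.has_value());  // still engaged!
        // *a is a moved-from std::string: valid but unspecified contents.
    }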
https://github.com/llvm/llvm-project/tree/main/llvm/lib/Targ...