We have been down this road like three times already in this administration. Buy the dip, TACO, profit. If this isn't obvious by now, you're leaving money on the table.
I find it works pretty well; at the very least, I'm consistently surprised I have any signal at all. Sometimes I need to disable Private Relay to get it to work through all the switchovers. There's also frequent swapping between the 5G signal and the carrier Wi-Fi that comes auto-enabled on my phone; my carrier provides a client certificate entitling me to it.
I'm not sure about the privacy implications of this whole setup. It's basically turned the underground into a surveillance dragnet that can hoover up all sorts of interesting metadata… hostnames, hardware identifiers, travel patterns, DNS queries, SNI requests… and an untold amount of unencrypted communications across weird protocols and devices.
It’ll be interesting to see how far they take this cat and mouse game. Will “model attestation” become a new mechanism for enforcing tight coupling between client and inference endpoint? It could get weird, with secret shibboleths inserted into model weights…
Cat and mouse indeed... such is the way of the internet nomad
There ain't no client validation mechanism you can't fake with enough time, patience, reverse-engineering, and good-old-fashioned stubborn hacker ethos.
> Somewhere in GitHub's codebase, there's an if-statement checking when a repository was created to decide which ID format to return.
I doubt it. That's the beauty of GraphQL — each object can store its ID however it wants, and the GraphQL layer encodes it in base64. Then when someone sends a request with a base64-encoded ID, there _might_ be an if-statement (or maybe it just does a lookup on the ID). If anything, the if-statement happens _after_ decoding the ID, not before encoding it.
There was never any if-statement that checked the time — before the migration, IDs were created only in the old format. After the migration, they were created in the new format.
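To make that concrete, here's a minimal sketch in Python. The legacy-ID decoding matches GitHub's documented base64 format ("0<type-name-length>:<TypeName><database id>"); the resolve() dispatcher is purely hypothetical, just to illustrate that any format branch runs on the decoded ID at lookup time, not on a creation timestamp.

```python
import base64

def resolve(node_id: str):
    """Hypothetical resolver: the format check happens when a request
    arrives, against the ID itself, never against a creation timestamp."""
    try:
        decoded = base64.b64decode(node_id).decode()
    except Exception:
        decoded = None
    if decoded and decoded[:1] == "0" and ":" in decoded:
        # Legacy shape: "0<type-name-length>:<TypeName><database id>"
        head, _, rest = decoded.partition(":")
        type_len = int(head[1:])
        return ("legacy", rest[:type_len], int(rest[type_len:]))
    # New-style IDs (e.g. "R_kgDO...") use their own encoding scheme,
    # omitted here.
    return ("next-gen", node_id)

print(resolve("MDEwOlJlcG9zaXRvcnkxMjk2MjY5"))
# -> ('legacy', 'Repository', 1296269)
```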
It’s also a bet that the capex cost for training future models will be much lower than it is today. Why invest in it today if they already have the moat and dominant edge platform (with a loyal customer base upgrading hardware on 2-3 year cycles) for deploying whatever future commoditized training or inference workloads emerge by the time this Google deal expires?
The alternative is delivering no items. Spamming all your friends with $0.83 items is a waste of resources, period. When Temu did this on an industrial scale, the USPS was not happy about it, and it shouldn't be any different when you're abusing the postal system as a gag.
Dell monitors are very hit-or-miss for me. I've got two with model numbers very similar to the OP's. One of them has a straight vertical line of red pixels at (30%, 0%). The other doesn't have an integrated webcam.
Meanwhile I've got an MSI 32” 4K 240Hz OLED monitor that was super expensive but is absolutely incredible. It takes some getting used to, having a monitor that performs a maintenance routine on itself any time you leave it active for more than a few hours. But it's great for work (with some aggressive zoom levels) and gaming (with some aggressive black point levels).
Off topic, but JFYI, with last year's firmware update (OLED CARE 2.0), you can now delay the refresh notification for up to 24 hours. I haven't seen the notification pop up since updating.
I do love that Zoom feature that shows the avatars of who's joined already. Although I don't like the game theory of it when every attendee is watching those icons…