This appears to be a poorly-sourced "research" page put together by a marketing agency. There's really nothing there about the methodology of this work, and there's plenty to suggest the authors have very little knowledge of firearms (a fired bullet in the first animation is shown with a casing, there's no discussion of bullet aerodynamics, which are going to be very different for most sharp-tipped rifle bullets and for handgun rounds, etc.) or physics (e.g., no accounting for energy when evaluating the velocity at which a bullet can penetrate a skull).
There's something to be said about compression algorithms being predictable, deterministic, and only capable of introducing defects that stand out as compression artifacts.
Plus, decoding performance and power consumption matter, especially on mobile devices (which also happens to be the setting where bandwidth gains are most meaningful).
While that is kind of true it is also sort of the point.
The optimal lossy compression algorithm would be based on humans as the target: it would remove details that we wouldn't notice in order to reduce the output size. If you show me a photo of a face in front of some grass, the optimal solution would likely be to reproduce that face in high detail but replace the grass with "stock imagery".
I guess it comes down to what is important. In the past, algorithms were focused on visual perception, but maybe we are getting so good at convincingly removing unnecessary detail that we need to spend more time teaching the compressor which details are important. For example, if I know the person in the grass, preserving the face is important. If I don't know them, then it could be replaced by a stock face as well. Maybe the optimal compression of a crowd of people is the 2 faces of people I know preserved accurately and the rest replaced with "stock" faces.
Remember the Xerox scan-to-email scandal in which tiling compression was replacing numbers in structural drawings? We're talking about similar repercussions here.
This reminds me of a question I have about SD: why can’t it do a simple OCR to know those are characters not random shapes? It’s baffling that neither SD nor DE2 have any understanding of the content they produce.
You could certainly apply a “duct tape” solution like that, but the issue is that neural networks were developed to replace what were previously entire solutions built on a “duct tape” collection of rule-based approaches (see the early attempts at image recognition). So it would be nice to solve the problem in a more general way.
> why can’t it do a simple OCR to know those are characters not random shapes?
It's pretty easy to add this if you wanted to.
But a better method would be to fine tune on a bunch of machine-generated images of words if you want your model to be good at generating characters. You'll need to consider which of the many Unicode character sets you want your model to specialize in though.
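The "duct tape" post-filter discussed above could be sketched roughly like this. To be clear, every function here is a hypothetical stand-in of my own naming: `generate` would be a real txt2img call and `read_text` a real OCR engine (e.g. Tesseract); nothing below is SD's actual API.

```python
# Rejection-sampling sketch: re-generate until OCR finds the text we asked for.
# Both generate() and read_text() are toy stand-ins, NOT real SD/OCR calls.

def generate(prompt, seed):
    # stand-in for a diffusion txt2img call
    return f"image<{prompt}:{seed}>"

def read_text(image):
    # stand-in for an OCR engine; pretend only some seeds yield legible text
    return "HELLO" if image.endswith((":0>", ":2>", ":4>")) else ""

def generate_with_legible_text(prompt, expected, max_tries=5):
    """Re-sample with new seeds until OCR recovers the expected string."""
    for seed in range(max_tries):
        img = generate(prompt, seed)
        if expected in read_text(img):
            return img
    return None  # give up after max_tries attempts

result = generate_with_legible_text("sign saying HELLO", "HELLO")
```

The obvious cost is that you pay for several full generations per accepted image, which is why fine-tuning the model itself is the more general fix.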
With compression you often make a prediction then delta off of it. A structurally garbled one could be discarded or just result in a worse baseline for the delta.
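A minimal sketch of that predict-then-delta idea, using the simplest possible predictor (the previous sample). The point is that a bad prediction only inflates the residuals, it never corrupts the reconstruction:

```python
# Toy delta coder: predict each sample as the previous one, store residuals.
# A garbled prediction just means larger residuals (a worse baseline),
# because the decoder applies the exact same prediction rule.

def delta_encode(samples):
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)  # residual against the prediction
        prev = s
    return out

def delta_decode(residuals):
    prev = 0
    out = []
    for r in residuals:
        prev += r             # prediction + residual recovers the sample
        out.append(prev)
    return out

data = [10, 12, 13, 13, 40, 41]
assert delta_decode(delta_encode(data)) == data
```

Real codecs use much smarter predictors (motion compensation, intra prediction), but the same structure holds: a structurally garbled prediction can be discarded or simply costs more bits.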
I was told (on the Unstable Diffusion discord, so this info might not be reliable) that even with using the same seed the results will differ if the model is running on a different GPU. This was also my experience when I couldn't reproduce the results generated by the discord's SD txt2img generating bot.
I'm not sure about the different GPU issue. But if that is an issue, the model can be made deterministic (probably compromising inference speed), by making sure the calculations are computed deterministically.
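The seeding half of this is easy to illustrate in plain Python (this is not SD's code, just the general pattern of isolating a seeded RNG):

```python
import random

def generate(seed, n=5):
    # A local RNG isolated from global state: same seed -> same sequence
    # on the same platform and library version.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

assert generate(42) == generate(42)   # reproducible
assert generate(1) != generate(2)     # seed actually matters
```

The cross-GPU part is harder: seeding doesn't help if the kernels themselves are nondeterministic (e.g. parallel floating-point reductions whose summation order varies). That's what framework switches like PyTorch's `torch.use_deterministic_algorithms(True)` are for, usually at some cost in speed.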
Yeah, it's usually a combination of these two factors. You live beyond your means for a decade or five, and then something else goes wrong: natural disaster, war, crop failure, sanctions, whatever. Situations that could be survivable otherwise trigger a downward spiral because you were pushing your luck before.
The history of coinage spans thousands of years. Despite the popular tales, spot barter was almost certainly not the basis of any real economies.
There is definitely some bias in the availability of data, but it was comparatively harder to end up in a hyperinflationary spiral in the era of commodity or representative currencies, and these were commonplace until the twentieth century.
There were instances of money suddenly losing all value due to the failure of the issuing state (e.g., confederate dollar banknotes), but that's probably a different story.
Many countries that suffer hyperinflation keep the historical name of their currency, but establish some exchange rate between the "old" and "new" money. Zimbabwe went through four cycles - currency codes ZWD, ZWN, ZWR, and ZWL.
Per Wikipedia: "The final redenomination produced the "fourth dollar" (ZWL), which was worth 10^25 ZWD (first dollars)."
The ZWL itself was subsequently largely abandoned, too. I believe you'd have more luck transacting in foreign currencies.
"Allied" in the sense of having a USSR-installed puppet government propped up by the massive presence of Soviet troops. This was a part of the concessions made by the West to Stalin, not an expression of the will of Polish or German people.
Teflon is very non-reactive and any small pieces you ingest should pass through unchanged pretty quickly. You likely ingest a lot more plastic from other sources.
Many LED lightbulbs make claims about their expected lifetime. Except these numbers are often a fantasy. The LED itself may last almost forever, but the capacitors commonly go bad in a year or so.
I suspect that would be the reality with a lot of the proposed mandatory labeling for electronics, too.
They also severely overdrive the LEDs to get more brightness, at the cost of both reduced efficiency and lifetime.
At least the Phoebus Cartel had an ostensible explanation (efficiency) for what they did. Doing the same with LED lighting is pure corporate greed.
Some of the indicator LEDs on some of my electronics are many decades old, yet they are still functioning like when they were new. Clearly LEDs can last a long time, but there wouldn't be any profit in that.
Here's some interesting discussion about the "Dubai Lamp", an attempt at going the opposite direction and actually making LEDs last significantly longer: https://news.ycombinator.com/item?id=27093793
So when I buy a 600 lumen LED bulb that burns 7 W, I'm not only getting hosed because they're overdriving the LEDs but I'm also using over twice as much electricity as I could be?
Not really. The "overdriving" (really just choosing a higher point on the current/output curve) is part of how they deliver those 600 lumens. If you put half as much current through the same LEDs, they would in theory last longer. You could deliver 600 lumens for under 7 W (but not as low as half) using a more expensive array of LEDs driven less hard, and still get the longer lifetime, but it's not easy to be sure the upfront cost and embedded energy would always be justified.
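Back-of-envelope version of that trade-off (the 25% efficiency gain at the lower drive point is a made-up illustrative figure, not a datasheet number):

```python
# Luminous efficacy as sold: 600 lm at 7 W.
bulb_lumens = 600
bulb_watts = 7
efficacy = bulb_lumens / bulb_watts            # ~85.7 lm/W

# Hypothetical: the same LEDs driven less hard run ~25% more efficiently
# (illustrative figure), but you need more of them to reach 600 lm.
derated_efficacy = efficacy * 1.25             # ~107 lm/W
derated_watts = bulb_lumens / derated_efficacy # 5.6 W for the same output
```

So a gently-driven array saves watts, but nowhere near the factor of two the parent comment hoped for, and it needs more LED dies up front.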
Wouldn't this be a prisoner's dilemma sort of situation? The cartel hinges on every producer being complicit: if they all comply, each gets a small long-term advantage; if one defects, it alone gets a huge short-term advantage to the detriment of everyone else.
If a single one of them doesn't play ball and starts selling "forever lamps" that last a hundred years (slap some patents on running LEDs at their rating, why not), they'll effectively salt the earth for the entire market.
No, it also works when non-optimal designs are very common in the market. (Not helped by the fact that many brands, especially in electronics, have decreased in (perceived) quality, so even if something performed well, people don't necessarily trust that newer versions do too, which makes actually good brands less sticky.)
Bad brands aren't sticky in this situation, are they? (I don't know whether good brands are sticky here.) So there's no extra incentive for planned obsolescence, because the next purchase is more than likely to go to a competitor.
They’re saying all brands use planned obsolescence to force consumers to buy lightbulbs (from any brand) more often, increasing sales for the whole industry, not one specific brand.
This is how the Phoebus Cartel worked for incandescent bulbs. Every brand was in on it, it wasn’t about improving market share for specific brands.
I'm surprised so many people's experience of LED bulbs is short lifetime. Most of the LED bulbs I ever bought have lasted many, many years. I only had one type ever fail and those looked clearly inferior in build to all the other types. ("free" with some light fittings, not my choice). Took one of those apart, no capacitors in the design any more but dodgy wire connections that creep with the hot-cold on-off cycle in an under-ventilated fitting. So am I a statistical outlier, or are most people getting the very worst dreg-quality bulbs, or are there a lot of very enclosed fittings out there cooking the lamps?
I've got a lot of different brands of LED light bulbs and I think maybe one has "burned out" over the years. I have to wonder if maybe the power in their house is less "clean" than it should be, or if their area has a lot of power spikes or brownouts and they just don't realize it.
That's an interesting possibility I hadn't considered. I don't really have a handle on how robust these things innately are but I'm sure there is no room in the average bulb's bill of materials for any special handling for rough power.
Recessed lighting fixtures are a common cause of problems with LEDs due to poor heat dissipation. But a lot of newer homes have recessed lighting, and in at least some parts of California, the fixtures are now required to be sealed (I guess for overall house energy efficiency).
A lot of these questions touch on important topics, but I think there are too many of them and they are far too specific - to the point where you might be inadvertently signaling some kind of unreasonable inflexibility ("you better be using a particular bug tracking system or else I'm out!"). I'd suggest generalizing and combining many of these. Asking the interviewer to walk you through the development process can be more revealing and is less adversarial than a rapid-fire of 30 questions.
Also, the likelihood that a candidate will be informed of any non-public plans to sell the company is slim...
Yeah, they kind of run the gamut from "you can look this up on LinkedIn" (company size) to ones I'd defer to the CFO ("Any plans to sell?"), where I doubt you'd get an answer.
One thing I'd add -- don't be afraid to ask the company recruiter questions too. They should know about the training/conference budget, dress code, etc. That'll make better use of your Q&A time with the hiring manager.
There's a lot more to that. A bank doesn't want the "back" button to work forever; they want to control the lifetime of your session, ideally on the server. Google wants to let you sign into multiple accounts on the same origin. Many others want to have seamless single sign-on across several of their web properties. Sometimes, you want the change of your password to invalidate other sessions (say, when recovering a compromised account); other times, you don't want to kick out your smart thermostat and have to set it up from scratch.
Admittedly, there are some simple use cases where HTTP auth is all you need, but it's just way too inflexible, unless you turn it into some mammoth spec that is never going to be as flexible and tempting as managing user identity yourself.
Especially since HTTP auth doesn't actually mean you can stop doing that anyway. You're still handling account creation, password checking, all the abuse / bot detection bits... all you're getting rid of is the sign-on and logout functionality, which is really not that complicated to begin with.
Maybe we don't want them to be able to do any of that
And I think you are missing the point: the goal is not to standardize logins, it's to make it impossible for servers to know my password, hence no password leaks
That would allow people to reuse strong passwords and not need password managers, because that's what they are doing anyway!
> Maybe we don't want them to be able to do any of that
"We" who? Application owners want that, browser vendors want that (their greatest fear is that mobile will eat the web, so they don't want to make the platform less flexible)... and users generally don't mind.
> impossible for servers to know my password, hence no password leaks
That would require deeper architectural changes to HTTP auth, but is probably a reasonable goal. That said, it's more readily approximated with unique passwords + having a good password manager. The main risk of password leaks is not that they make that particular breach worse (since the attackers can just grab your data), but that passwords are reused too often.
Federated login is another approximation, where the password is only known to your identity provider, not to every identity consumer. It's modestly successful for some lower-value services.
It depends on whether that is required. Most enterprise software, which nowadays is more and more web based, doesn't need all of that. Accounts are created by the system administrator, the password check is fine with the default mechanism of Nginx or Apache with a .htpasswd file, and bot detection and all the other things are not really necessary, especially if the page is not exposed to the internet but only on a LAN.
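For reference, the whole Nginx setup being described is a few lines (paths and the username are examples; `htpasswd` comes from the apache2-utils / httpd-tools package):

```shell
# Create the password file and add a user (prompts for the password)
htpasswd -c /etc/nginx/.htpasswd alice

# Then point a location block at it, e.g. in the site's server config:
#
#   location / {
#       auth_basic           "Restricted";
#       auth_basic_user_file /etc/nginx/.htpasswd;
#   }
#
# Reload to apply:
nginx -s reload
```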
Besides that, if you need a more sophisticated authentication mechanism, nowadays your default is to go with something that uses the OAuth protocol; so I guess the next step would be to standardize that protocol and have it integrated as a browser API, so that a user doesn't even have to enter a password.