Pointless, superficial review, standard for Wired, The Verge, etc. - no brightness uniformity check, no color uniformity check, no color accuracy check, no coating grain check.
Meanwhile a good number of reports mention terrible uniformity issues with that model.
Rtings publishes charts in abundance, but the subjective quality of a monitor is more important. For example, a chart will tell you a monitor has low color deviation from sRGB after calibration, but won't tell you that the monitor UI takes 10 laggy clicks to switch from sRGB to DCI-P3 and will reset your selection every time you toggle HDR mode.
I admire Rtings' attempts to add more and more graphs to quantify everything from VRR flicker to raised black levels. Those graphs were helpful when I last shopped for a monitor. But the most valuable information came from veteran monitor reviewers such as Monitors Unboxed and TFTCentral.
They do a good job of supporting whatever comparisons you want to make, which is useful if you have different preferences; however, I do think they draw clear conclusions in many cases that capture strengths, weaknesses, and "best of" recommendations.
Exactly. Their ten-point scales have no obvious relationship to the underlying measurements (where measurements are provided at all), and they rescale the points every so often. (They call this their "methodology version.")
I've noticed that, when a new (expensive, high-commission-generating) product comes out, it often has middling scores at first, and then, a few months later, they've revised their methodology to show how much better the pricey new product is.
1) I trust Rtings not to change their position on the basis of what makes them money; that trust is their whole brand.
2) I have not seen products jump from middling to high before, but I have seen the scores change with new methodologies, and sometimes that has the net effect of lowering the scores of older devices. Typically, that seems to represent some change in technology, or in what people are looking for in the market. For instance, I would expect (have not checked) that with the substantially increased interest in high-refresh-rate monitors, the "gaming" score has probably changed to scale with gamer expectations of higher frame rates. That would have the net effect of lowering the score of what was previously considered a "good" monitor. This seems like an inherent property of improving technologies: last year's "great" can be this year's "good" and next year's "meh".
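As a toy illustration of that rescaling effect (my own made-up normalization, not Rtings' actual formula): if a refresh-rate sub-score is scaled against whatever the market currently treats as high-end, the same monitor's number drops when that ceiling moves.

    # Hypothetical: normalize a refresh-rate sub-score against the current
    # "gamer expectation" ceiling. Purely illustrative of methodology drift;
    # this is not how Rtings actually computes anything.
    def refresh_score(hz: float, expected_max_hz: float) -> float:
        """Map a refresh rate onto a 0-10 scale, capped at the ceiling."""
        return round(min(hz / expected_max_hz, 1.0) * 10, 1)

    print(refresh_score(144, expected_max_hz=144))  # 10.0 under last year's ceiling
    print(refresh_score(144, expected_max_hz=240))  # 6.0 once 240 Hz is the expectation

Same panel, lower number; only the yardstick changed.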
Personally, I never pay much attention to the 0-10 scores in the first place, and always just make tables of the features and measurements I care about. The only exception is for underlying measurements that are complex and need summarizing (e.g. "Audio Reproduction Accuracy").
I've just bought one. The color uniformity is terrible across the screen. It's tolerable to me, but not what I'd expect at the price. Everything else is quite fantastic though.
Ok, fine, yes, I missed it. But an average Delta-E measured without specifying what mode it was measured in, what the maximum Delta-E is (not the average - I've used monitors with an average dE below 1 that had massive greenish/pinkish color tints, because the maximum dE on some colors well exceeded 4), what the dE is away from the center of the screen, and what the gamma and colorspace coverage errors are, is pointless. Check prad.de; they do it the right way.
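To make the average-vs-maximum point concrete, here's a minimal sketch with made-up per-patch dE00 readings (illustrative numbers only, not measurements of any actual monitor):

    # Hypothetical dE00 readings for a handful of test patches on one monitor.
    # The numbers are invented to illustrate the point, not taken from a review.
    patch_de = {
        "white":     0.3,
        "50% gray":  0.4,
        "red":       0.5,
        "green":     4.2,  # one badly off primary is enough for a visible tint
        "blue":      0.4,
        "cyan":      0.5,
        "magenta":   0.4,
        "skin tone": 0.6,
    }

    avg_de = sum(patch_de.values()) / len(patch_de)        # ~0.91, "under 1"
    worst_patch, max_de = max(patch_de.items(), key=lambda kv: kv[1])

    print(f"average dE00: {avg_de:.2f}")
    print(f"maximum dE00: {max_de:.1f} ({worst_patch})")

A headline "average dE below 1" swallows that green patch entirely; only the maximum (and ideally a per-patch and off-center breakdown) exposes it.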