In fairness, hindsight is 20/20, and it's very easy to look on aghast -- or with smug condescension -- at the failed predictions of yesteryear. But how many of us would have called all of the shots on that list correctly back in 2006?
Furthermore, it is human nature to suffer from an anchoring bias when looking at a list like this one. We immediately assign more weight to the 2 or 3 "disastrously wrong" predictions in the list of 10 than we do to the other 7 or 8. Hell, we probably assign 99% of the weight on that list to the Facebook prediction. It's the most visibly and obviously wrong, so it anchors our perception of the entire list.
I doubt there's anyone out there who gets his predictions right 100% of the time. And I bet there are plenty of highly successful prognosticators with at least one or two hilariously bad calls on their track records.
I'm not defending this list or its author, per se, but just pointing out that everyone gets it wrong sooner or later. Throw enough shots at the hoop, and you're going to miss a few of them. Granted, you could certainly argue that tech journalists should shy away from the predictions game and focus solely on reporting. But where's the fun in that?
Perhaps, but then, the expression "caveat emptor" comes to mind. As the readers of such predictions, we should be taking these things with a healthy grain of salt.
That's the unspoken compact between the prognosticator and his audience. It's his job to make as good of a guess as he can; it's our job to keep in mind that he's guessing.
Actually, I'd be happier if I knew of prognosticators who did put their money where their mouths are. Predictions accompanied by bid/ask spreads would be vastly more useful than predictions intended for immediate entertainment (i.e., all the ones the pundit is unwilling to bet on).
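To make the bid/ask idea concrete: on a binary prediction contract that pays out $1 if the event happens, the prices a pundit is willing to buy and sell at bound the probability he actually believes. A minimal sketch (the function name and the example numbers are illustrative, not from any real market):

```python
def implied_probability_range(bid, ask, payout=1.0):
    """For a binary contract paying `payout` if the event occurs,
    the quoted bid/ask prices bound the quoter's implied probability:
    buying at `bid` only makes sense if P(event) >= bid/payout, and
    selling at `ask` only makes sense if P(event) <= ask/payout."""
    return (bid / payout, ask / payout)

# A pundit willing to buy a $1 contract at $0.10 and sell it at $0.25
# is implicitly claiming the event's probability is between 10% and 25%.
low, high = implied_probability_range(0.10, 0.25)
```

A pundit who quotes a tight spread is making a precise, falsifiable claim; one who refuses to quote at all is, by his own revealed preference, just entertaining us.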
As it is, though, there are just a few mildly popular websites that enable that.
Well, the counterargument to that is conflict of interest. Do I really trust the predictions of someone with financial interests in the outcome he's predicting?
I think there's a place for both types of reporting in our media. You've got Seeking Alpha for financial predictions by interested parties (who will at least disclose their interests, which is nice). Then you've got the mainstream media for general, ostensibly unbiased reporting and predictions.
(Experience has taught me that getting into a debate about the presence or absence of bias in the mainstream media is a classic blunder akin to starting a land war in Asia, or going in against a Sicilian when death is on the line. So I'll refrain from getting that started.)
The thing about trying to move the market by betting is that if someone calls your bluff, you're ruined; and all it takes is one knowledgeable person with moderate resources, since to really influence things you have to put out big odds and lots of money. That's why prediction markets have historically been quite resilient against manipulation attempts.