I broke that circle by having a sibling ultimately follow my recommendation to get a ThinkPad T at a discount (prev-gen during a sale) and then letting them advertise it to the rest of the family.
If you ask me, for a comparable price range, the ThinkPad is still a much better pick than the MacBook Neo: that thing has no IO and not even enough RAM for today's light web browsing.
You're comparing a $1254-minimum laptop[0] with a $599-minimum laptop[1] and asserting that the one that's twice as expensive is nicer.
I'd expect it to be. In fact, I'd demand it.
(I'm ignoring the "old model, found cheaply" bit because that's entirely irrelevant. You can find old Macs on sale, too, but that doesn't mean you can reasonably compare them to the MSRP of a brand new device.)
And I still stand by the claim that, for that price, you've got a very competent device that is better specced for light use and friendlier for mom and pop (look, it has an HDMI port, you can connect it straight to the telly! Look, it has USB-A ports, so that old camera, the hard drive with the family pictures, and the weird old ergo mouse all just work out of the box!).
Again, we’re not comparing a brand new Mac price to an old PC price. Yes, old will be cheaper. Old MacBooks are cheaper than new ones, too.
But for giggles, let’s look at the old PC.
Despite being heavier, wider, taller, thicker, slower, dimmer, lower resolution, hotter, older, and having less battery life, it is, indeed, $20 cheaper.
Put another way, there’s no way on earth I’d pick that over a MacBook Neo to save $20 at the cost of having a worse laptop in almost every way.
That's a valid opinion to hold. I think both machines are Pareto-optimal though. The ThinkPad will likely have a longer useful life because of its heavy build, extra I/O (each port gets less use), and upgradeable parts. The Neo clearly wins on power efficiency, battery life, resolution...
TBH, if I imagined I was the median casual user, I would also pay the $20 marginal cost for the Neo. "Worse in almost every way" just depends on how you weight each individual parameter, and my weighting is quite atypical.
I don't see why comparing prices between used and new options is unreasonable in this case. If I want a machine to do XYZ (without the stipulation that it be new), then an older model might well be better value. "In $CURRENT_YEAR, how can I get X processing power?"
Of course, old Macs should factor into that too. Also, it's a different story if I do want something brand new.
Here it’s because the old PC they picked is worse in every way than the brand new PC, except for RAM, which the Mac largely mitigates by having ludicrously fast flash hanging off the CPU. Of course an older, worse PC is going to be cheaper than a new Mac. (Except in this case, buying the boat anchor saves you a whopping $20. It’s not even better specs for the same price: it’s worse than the Apple gear that costs the same.)
If we want to compare new vs used, then how much would you have to spend to buy a brand new PC laptop as powerful as last year’s MacBook Pro?
> that thing has no IO and not even enough RAM for nowadays light web browsing.
You can literally open up every app (50+) on it and simultaneously edit 4k video without issues. It handles all of the pro apps really well. So it objectively can handle light web browsing just fine.
Same here, MacBooks are decent hardware but nowhere near so superior as to justify all the downsides and increasingly dark patterns Apple has been pushing left and right.
I agree that it isn't as good as it was, but compared to Windows (with ads in the Start menu, and two different settings menus for a decade, as examples) it's still better. More a glass of warm cheap whiskey than a glass of cool ice water in hell.
A reminder, if one was needed, that the future of your instant messaging shouldn't above all be "the same thing, but with more crypto" (e.g. Signal), but "less centralisation first, then a reasonable amount of crypto" (federation, e.g. XMPP, or P2P; though I don't know of a P2P solution I could recommend).
> OECD countries' past emissions are causing the warming we see today.
China passed the EU's cumulative emissions in 2014, if I remember correctly. It's totally fair to blame industrialised countries for their share in causing global warming, irrespective of whether that happened in the early days of industrialisation and was propped up by dirty energy sources. It's morally much harder, though, to give a pass to countries polluting now using those same sources.
No it hasn't. Work mechanisation throughout history has resulted in a shift from manual labour to one that's more intellectual in nature. Modern AI believers pretend that it will take over those jobs soon as well.
This would essentially bring us to a crossroads between, on one hand, a utopia with UBI and people not needing to work (because their labour is unnecessary), and on the other, a dystopia where a few technocratic "lords" own the means of work automation and rule over a submissive world.
I don't think it takes a genius to guess where this is heading in our current political climate.
Personally, I'm not scared about any of that, because I don't believe LLMs to be very potent as an AI tool. Robotic militias (remotely controlled by BI or AI) seem a much more tangible threat.
> How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person".
JFYI, LLMs still can't solve 7x8 on their own, and quite possibly never will. A more rudimentary text processor shoves that into a calculator for consumption by the LLM. There's a lot going on behind the scenes to keep the illusion flying, and that lot is a patchwork of conventional CS techniques that has nothing to do with cutting-edge research.
To many people interested in actual AI research, LLMs are known for the very flawed and limiting technique they are. The growing disconnect between that and the narrative in which they are front and center of every AI shop, carrying a big chunk of global GDP on their back, is annoying and borderline scary.
This is false. You can run a small open-weights model in ollama and check for yourself that it can multiply three-digit numbers correctly without having access to any tools. There's even quite a bit of interpretability research into how exactly LLMs multiply numbers under the hood. [1]
When an LLM does have access to an appropriate tool, it's trained to use the tool* instead of wasting hundreds of tokens on drudgery. If that's enough to make you think of them as a "flawed and limiting technique", consider instead evaluating them on capabilities for which there aren't any tools, like theorem proving.
* Which, incidentally, I wouldn't describe as invoking a "more rudimentary text processor" - it's still the LLM that generates the text of the tool call.
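To make that last point concrete, here's a minimal sketch of what a tool-call harness does. Everything here is made up for illustration (the JSON shape, the `handle_calculator_call` name, the tool names); real LLM APIs differ in the details, but the division of labor is the same: the model emits the call as plain text, and a conventional program parses and executes it.

```python
import json

def handle_calculator_call(raw_tool_call: str) -> str:
    """Parse a calculator tool call that the model emitted as text,
    execute it, and return the result to feed back into the context.

    The JSON schema here is hypothetical, not any particular vendor's API.
    """
    call = json.loads(raw_tool_call)
    if call["tool"] != "calculator":
        raise ValueError(f"unexpected tool: {call['tool']}")
    a, op, b = call["args"]["a"], call["args"]["op"], call["args"]["b"]
    ops = {"mul": lambda x, y: x * y, "add": lambda x, y: x + y}
    return str(ops[op](a, b))

# The string below is what the LLM generates; the harness merely runs it.
result = handle_calculator_call(
    '{"tool": "calculator", "args": {"a": 7, "op": "mul", "b": 8}}'
)
print(result)  # 56
```

So "the LLM can't multiply" and "the LLM uses a calculator tool" aren't in tension: generating the call text is the model's job, doing the arithmetic is the harness's.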