Of course it's doing something for you. Room to defrag other areas of RAM, room to load something new without moving something else out of the way first.
Your perspective sounds like the concept that space in a room does nothing for you until/unless you cram it full of hoarded items.
Yeah, sed (and friends) browbeat everyone into learning regex (which Perl then refined).
I think it might be more cognitive load than it's worth to expect everyone, en masse, to learn yet another single-line, punctuation-driven language just to perform everyday tasks.
That sounds like a mistake which would be easy to make at the end of the line, unless you are contrasting input-stream redirection against cat regardless of where it's written on the line?
I would argue that the segment of the market whose purchases incentivize personal responsibility on their PCs is outweighed by the segment blowing their disposable income on tablets and smartphones, who just want things to work and want whatever they see other people using on social media.
We both know which segment of the market the large companies want to win that battle. They want to sell rented compute resources through nothing but devices that are impossible to administer locally, where every sensor spies on you and it's impossible to store any data or documents locally, let alone privately.
Even OneDrive is pushing hard to literally erase your hard drive and host your documents only on their servers.
I have zero actual experience in training models, but in general, when parallelizing work, there can be fundamental nondeterminism (e.g., some race conditions) that is tolerated, and recording/reproducing it exactly can be prohibitively expensive.
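For a concrete picture of what that tolerated nondeterminism can look like, here is a toy Python sketch (entirely hypothetical, not taken from any actual training code): partial sums get combined in whatever order the workers happen to finish, so the floating-point total can differ in its last digits from run to run, and recording the exact combination order just so a run could be replayed bit-for-bit would add real bookkeeping overhead.

    # Toy sketch: a nondeterministic reduction order that is usually tolerated.
    from concurrent.futures import ThreadPoolExecutor, as_completed
    import random

    random.seed(0)  # the inputs are identical on every run...
    data = [random.uniform(-1.0, 1.0) for _ in range(1_000_000)]
    chunks = [data[i:i + 10_000] for i in range(0, len(data), 10_000)]

    total = 0.0
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(sum, chunk) for chunk in chunks]
        for fut in as_completed(futures):   # ...but completion order is not,
            total += fut.result()           # so rounding error accumulates differently
    print(total)                            # last digits can wobble between runs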
The 9900 was exactly contemporary with the LSI-11 CPU. Both TI and DEC were taking advantage of new LSI gate-counts to move discrete TTL CPUs into one chip.
The 990 series of minicomputers was competing with PDP-11s (though DEC had the highest market share, something like 33% of the whole mini market, I believe).
The 9900 condensed that design into a single chip in 1975 and went into the low-end 990/4. The higher-end 990/9 and 990/10 were always going to be discrete TTL, as the 9900 didn't support memory protection or mapping to the 2MByte total address space.
TI was always conscious of not challenging IBM head-to-head in minicomputers. Internal memos always projected TI's plan for its minis to occupy a space well below the latest IBM mainframes. From 1980, the planned 990/12 would arrive just as IBM delivered more compute power in their low-end... this was intentional, supposedly because IBM was the chief driver of TI's transistor business!
I'm curious about your thoughts on voluntarily donating the excess wages that you perceive yourself to be earning... and perhaps not directly to the US government (which is, to put it simply, not in a healthy state of mind at the moment), but instead to charity organizations that you can vet and trust?
Obviously, actually vetting these organizations to make sure that your dollar accomplishes what you wish of it remains a Very Hard Problem, but it is at least a baby step. From where we are right now (with our dystopian government), increases in taxation would not constitute even a small step in the right direction.
E.g., a better environment might look like a healthy government supported by higher taxes than we see today, but without that first "healthy government" component, the latter cannot be a net positive.
> Maybe you can expand a bit on how you are defining free market.
Not OP, but just look at a company town as an example in a bottle.
When the rich and powerful control the means of production so completely that they are the only people one can buy what one needs from, then in what way can the exchanges still be called "voluntary", and in what way is "mutual benefit" achieved rather than merely the lesser of two evils: "perpetual debtorship that one must endlessly toil to slow the progress of" versus "abject starvation"?
At the end of the day consent and free will are actually really complicated topics, and they can be surprisingly easy to pervert by unequal power dynamics. The market cannot be free whenever feudalism forms to take its place.
My factory produces squares, and every square is between 1ft and 3ft in side length.
Now what is the probability that the next square it outputs will be between 1ft and 2ft long?
The probability is zero percent, of course. Because my factory only produces squares with a side length of exactly 2.5ft (to within a micrometer tolerance, hooray!), day in and day out.
And as anyone can easily verify, every single one of those squares is between 1ft and 3ft in side length.
Notice how I didn't have to even begin to talk about areas?
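To make the trap concrete, here is a small sketch (my numbers and priors, not the commenter's): two seemingly reasonable priors, both consistent with "every side is between 1ft and 3ft", give two different answers, and the factory's actual behaviour gives a third.

    # Guess 1: assume side length is uniform on [1, 3] ft
    p_uniform_side = (2 - 1) / (3 - 1)   # 0.5
    # Guess 2: assume area is uniform on [1, 9] sq ft (sides 1..3 -> areas 1..9);
    #          "side < 2 ft" is the same event as "area < 4 sq ft"
    p_uniform_area = (4 - 1) / (9 - 1)   # 0.375
    # Reality in the story: the factory only ever makes 2.5 ft squares
    p_actual = 0.0
    print(p_uniform_side, p_uniform_area, p_actual)
    # The stated constraint is consistent with all three, which is the point:
    # the constraint alone doesn't pin down a probability.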
The video's thesis is simply that "Talking out of your ass when you have insufficient information has the capability of backfiring sometimes: oh the horror", and I find the subject approximately as uninteresting as the fact that different interpolation methods (nearest neighbor, bicubic, "ask AI image gen to fill in the gaps", etc.) are capable of inventing completely different false details into an image or dataset.
But I probably only find it equally uninteresting due to the claims being isomorphic.
When you don't have enough data, guessing at what is missing can be incorrect, and guessing in different ways can be incorrect in different ways. You have to allow that to wash out as enough genuine data arrives (which means washing out the differences between potential methods of interpolation), and you have to maintain your error bars correctly in the meantime instead of throwing them away.
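A toy sketch of that point, with made-up data: two fill-in rules agree on the only genuine samples but invent different values everywhere in between, and only more real samples can wash that difference out.

    import numpy as np

    x_known = np.array([0.0, 4.0])       # the only genuine samples
    y_known = np.array([1.0, 3.0])
    x_query = np.array([1.0, 2.0, 3.0])  # points we have no data for

    linear = np.interp(x_query, x_known, y_known)                         # [1.5, 2.0, 2.5]
    nearest = y_known[np.abs(x_query[:, None] - x_known).argmin(axis=1)]  # [1.0, 1.0, 3.0]

    print(linear, nearest)   # both "fill in the gaps"; both are invented detail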
So to loop back to the start: the probability that the next square will be between 1ft and 2ft is 50%, plus or minus 50%, which is just an over-engineered way of saying "there is literally not enough information on offer yet to make a guess of any trustworthiness".