I think maybe how you are conceptualizing design and how the GP meant it are not in agreement, and if you came to agreement on what it meant you wouldn't really disagree about the point either.
For example, I think design, as they mean it, could be described as "how to get that thing we care about". The correct amount of design depends on how exacting the outcome and outputs needs to be across different dimensions (how fast, how accurate, how easy to interpret, how easy to utilize as an input for some other system). For generalized things where there's not exacting standards for that, AI works well. For systems with exacting standards along one or more of those aspects, the process of design allows for the needed control and accuracy as the person or people doing the work are in a constant feedback loop and can dial in to what's needed. If you give up control of the inside of that loop, you lose the fine grained control required for even knowing how far you are away from theoretical maximums for those aspects.
If you have a complaint against "scientists" as hsme homogenous group, I think I'm going to have to ask you to explain how these particular scientists did not do that, and why you would think this is a problem of scientists (a label for a largelt disparate group not connected through any specific communication or hierarchy and mostly in output) in general?
The first time I ever heard The Glitch Mob I had such a clear memory of this games soundtrack come to mind that I mentioned it to my brother soon after (as it was his commodore and his copy of the game I was playing when I was young). I'm not even sure if the song I heard even sounds like the game soundtrack particularly closely, but the connection in my mind was very strong.
Well, I've taken to describing the best responsible use of AI to help your work as though you have an executive assistant, so I can see why people would come to that conclusion. I don't tend to think of booking flights for that though, I tend to think of asking them to gather information and present it to me so I can review it for whether it's appropriate to include, probably with changes, in whatever I'm working on. Perhaps an executive assistant isn't the right term for that, or perhaps it's just that different people and different industries have vastly different ideas of how to make use of an executive assistant. I don't know enough to answer that.
It's not quite as straightforward as that though. You're also required to pay a large sum up front to get the worker, have to pay for room and board and health for the worker, including children of workers which while they are investments that may eventually pay off, are mostly cost sinks until at least a few years have passed. There's more of a trade-off that might be immediately obvious when you dig into the reality of what it took.
Salved are free in neither up front nor ongoing cost, just as industrial equipment is not. It comes down to costs. Industrial equipment that is most costly than slaves seems unlikely to supplant them based on monetary incentives alone, while once it is less costly it's just the social economic momentum which needs to be overcome, which is likely a matter of time.
Importantly, I think there's only so much advancement you can get out of people by investing in economies of scale and iterating on process (and people, as icky as that idea is), while there's a huge amount of advancement to get out of machinery, including moving to whole new categories of machinery (which depending on how far you want to take the "slaved are machines" metaphor is waht a shift away from slaves was in the first place). In that respect maybe what you're noting is just that the shift from slaves to machines was the first in an iterative process which is speeding up over time.
> If we'd abolished slavery in Roman times we might have terraformed Mars by now.
I think maybe the right was to look at it is if we were able to abolish slavery and keep the same output (which might have required an economic or social system that incentivized farm consolidation for economies of scale that plantations were able to more easily achieve), then yes, we would have terraformed mars by now, but probably just because we happened to be along the tech tree earlier in the timeline.
Identical might be a bit strong. It's only identical if we signed a law that made oil and gas illegal tomorrow. There are definitely parallels, but this is much more of a normal market situation where most things are handled through incentives, not regulation to such an extreme degree we make the common immediately illegal.
Perhaps most importantly, it not being an immediate change allows the entrenched interests time to shift their strategies and portfolios over time to take advantage of the more economically advantageous option. Many people aren't happy with the time frames that generally requires, but they also seem to be very happy with reliable energy and and economy that doesn't collapse overnight and having invested a year or two ago in a car which would become worthless tomorrow.
> After abolition, the South's per capita productivity dropped substantially, and remained 20% lower per capita in 1880 than it had been in 1860.
I wonder how much of that was because of economies of scale (Even if it's forced scale). Plantations are large and have many workers, and can scale without having to worry (to a degree) about retaining workers, since workers are for most intents just machines you invest in and pay to keep up in that system, which allows for easier scaling.
We've seen increasing consolidation of farms into large entities over the centuries, so perhaps this was just a system that made that much, much easier to do.
Why would we assume an LLM, even one that doesn't appear to have a bias like that built in, doesn't have one? Just because we can't identify it immediately, does not mean it doesn't exist.
Groups of people can and do have bias, but I also think it's much harder to control the outcome (for better or worse) when inputs are more diverse.
There very likely is existing research into evaluating political bias in LLMs, not too sure, but I do think it's very possible to have an evaluation framework that could test LLMs for political bias and other biases. Once we have such a test and an LLM that passes it, we can be certain (to some confidence, for some topics, for some biases, etc etc) that the LLM won't be biased.
For humans, there is no such guarantee. The humans can lie, change their mind, etc. See Wikipedia, where they talk about how they are not biased, they have many processes that ensure no biases, blah blah blah, and it turns out they are massively biased, what a surprise.
Of course, who evaluates the evaluators/evaluation frameworks comes into play but that's a much easier problem.
> See Wikipedia, where they talk about how they are not biased, they have many processes that ensure no biases, blah blah blah, and it turns out they are massively biased, what a surprise.
It's clear you have some unfounded issue with Wikipedia. They are not "massively biased", that's a talking point propelled primarily by the right/far right because of a desire to rewrite history to match their ideological needs.
Saying "there very likely is existing research into evaluating political bias in LLMs" essentially means very little because
1. By your own admission you can't even say for sure that such research is actually happening (it probably is, but you admit you don't actually know)
2. There is no guarantee such research will lead to anywhere anytime soon
3. Even if it does, how does a means of evaluating bias in LLMs provide a path to eliminating it?
There has been lots of discussion about wikipedia’s bias in HN and elsewhere for years and I’m not going to rehash all of that.
> […] AI) as a viable replacement for the status quo.
Given that the status quo is clearly biased and structurally unwilling to be unbiased due to existing political affiliation, even an AI that is not evaluated all that well will be better. It can only get better from this status quo, so it’s a fine argument.
Discussion doesn't constitute consensus or conclusion - as I said several comments up, widespread bias in Wikipedia is a talking point propagated by those with an agenda to distort factual accuracy - people like Musk have hardly been subtle about this being their objective.
> even an AI that is not evaluated all that well will be better
This is just intellectual laziness. If you don't like Wikipedia that's fine, but if you're going to make the effort of characterising it as such on a public forum, the least you can do is make an effort to that point. This certainly isn't a "fine" argument at all.
From the description given, "The developer hired contractors who didn't know what they were doing and ignored stop work orders when the city learned of the problems" seems like it might have a lot to do with it.
For example, I think design, as they mean it, could be described as "how to get that thing we care about". The correct amount of design depends on how exacting the outcome and outputs needs to be across different dimensions (how fast, how accurate, how easy to interpret, how easy to utilize as an input for some other system). For generalized things where there's not exacting standards for that, AI works well. For systems with exacting standards along one or more of those aspects, the process of design allows for the needed control and accuracy as the person or people doing the work are in a constant feedback loop and can dial in to what's needed. If you give up control of the inside of that loop, you lose the fine grained control required for even knowing how far you are away from theoretical maximums for those aspects.
reply