That's like saying "so, exercise more" upon the invention of fast food. Maybe you will, that's great. But society is going to be rewritten by the lazy and we all will have to deal with the side effects.
Fat shaming works, no matter what the bleeding heart body acceptance social justice warriors say.
AI shaming also works. I do it when people say “I asked GPT/Claude and”
If I wanted the bot’s opinion I could ask the bot myself. I now think less of you for thinking that it is somehow acceptable to use a bot to do your thinking for you, because when I ask YOU something I want to know what YOU think.
The invention of fast food does not change anyone's ability to exercise. When fast food was invented, people exercised far more than they do today.
Time constraints have driven an increase in fast food consumption and a reduction in exercise.
Both issues, then, tend to be addressed by coercing individuals to change behaviour, when what is needed is systemic change to the environment so that preferable options exist.
I think this is a great question to ask. Maybe I need my own blog to post about these things, because I'm about to reply with a big comment.
Making Unpublished Software for Themselves
One issue is that a lot of people may be making software for themselves and not publishing it - at least I find myself doing this a lot. So there's still "more software produced than before", but it's unpublished.
Is LOC a Good Measure?
Another question, akin to using Lines of Code as a metric: is the number of new packages the best measure of AI productivity? AI might make certain packages obsolete, and there may be fewer but higher-quality contributions to existing packages as a result. So fewer packages might actually mean more productivity (although, conventionally speaking, we seem to assume the opposite).
Optimizing The Unnoticeable
Another issue that comes up is that AI may optimize unnoticeable things: AI may indeed make certain things go 100x faster or better. But say a website goes from loading in 1 second to 1/100th of a second... it's a real 100x gain, but in practice it isn't experienced as much of a gain at all. It doesn't translate into more tangible goods being produced. People might just load 100 pages in the same amount of time, which eats up the 100x gain anyway (!).
Bottleneck of Imagination
I think this also exposes a bottleneck of imagination: what do we want people to be building with AI? People may not be building things because we need more creative people dreaming up things to build. AI is only fed existing creative solutions and, while it does seem to remix those into new ideas, the people reading the outputs are only so creative. Standard projects I've thought of would be: 1) creating open source alternatives to existing proprietary software; 2) writing new software for old hardware (like "jailbreaking", but it doesn't have to be) so that it can be used for something other than e-waste; 3) reverse engineering designs so you can implement some new design on them, where open source code doesn't exist and we don't know how they function (maybe kind of like #1). So there is maybe a need for very "low tech" spaces where people regularly swap ideas on building things they can only build themselves so far, to either get the attention of more capable individuals or to build up teams.
Time Lag to Adapt
Also, people may still be adjusting to using AI tools. One other post detailed that the majority of the planet does not use AI, and an even smaller subset pays for subscriptions. So there's still a big societal lag in adoption, and in adopters learning how to use the tools. People might genuinely experience optimizing something at 100x, but they may not yet know how to leverage that and publish it so it optimizes things 100x for everyone else.
Social Media Breakdown?
Another problem: I have made stuff I'd like to share, but social media is already overrun with over-regulation and bots. So where do I publish new things? Even on HN, there was that post about how negative commenters can be; people have said very critical things about projects that ended up being very successful. So I wonder if this also fuels people quietly creating more stuff just for their own needs.
Has GDP Gone Up or Time Been Saved?
Do other measures of productivity exist? GDP appears to have gone up only a bit. But again, could people be making gains that don't translate into GDP gains? People do seem to post about saving time with AI, but the insidious thing about technology is that when people save 10 hours with one tool, they usually just end up spending that time working on something else. So unless we're careful, technology doesn't save some people much time at all (in fact, a few people have posted about being addicted to AI and working even more with it than before AI!).
Are There Only So Many "10x Programmers"?
Another issue is that maybe only a minority of people get "10x" gains from AI; at the same time, "lesser" devs (like juniors?) have apparently been displaced by AI through layoffs and hiring freezes.
Conclusion
I guess we are trying to reconcile the real gains and "100x experiences" people report with a seeming lack of tangible output. I don't think these are necessarily at odds with each other, for some of the reasons above. I imagine that in 5 years or so we'll see more clearly whether there is some noticeable impact. And, not to be a doomer / pessimist, but we may also have some very negative experiences from AI development that seem to negate the gains, which we'll have to account for too.
I've looked forward to the destruction of the credential system, as it seemed "highly unjust" and like a top barrier to people's freedom (although it hardly seemed to be talked about as such). It locked people into specific industries or locked them out in distasteful ways.
So I largely view AI in a positive light as cutting out this middle man to some extent.
The process might rather be:
IQ → skills → heritable wealth
Unfortunately, the credentialed sometimes possess skills, but at other times "merely" possess the credential (so they may not do a good job). Other people with skills might not possess credentials today, and so society forcibly prevents them from using those skills (!) at times. It will be nice if AI nudges the "system" into accepting more work without required credentials.
Credentialing can still exist as a voluntary system, and I don't per se object to that; it was more the involuntary aspects of credentials that have been off-putting. (Although to some extent even "voluntary" credentials may not be so voluntary, as there may be capital and biological constraints, as the article might get into.)
This conversation about credentials might ultimately loop back to considerations of primitivism, i.e. arguments that involuntary credentials are necessary to a highly advanced technological society, so that anyone who likes "freedom" must be more into "primitivism" and against technology, which necessarily "enslaves" people to a credentialed system and to dependence on a highly interconnected technological system. Stated otherwise: if the credentials are not optional, then our society is something of a collection of people immersed in "technological slavery", rather than free people who might live without respect to credentials or technology.
Credentials are a way to externalize trust. The trend with AI is to further erode trust. There will be a reaction against this eventually, and it's likely that more mechanisms to externalize trust will be found, not that they will become unimportant.
I think the problem is this: compare credentials with reputation. A programmer who possesses no credentials might create "good" software that is validated as "good" even by the credentialed. Meanwhile, a person with programming credentials might produce malicious software that betrays the "trust" of the credential. Thus, much like the fallacy of the labor theory of value, credentials do not inherently relate to the production of heritable wealth. AI shines a light on this disconnect: it is actually the involuntary nature of certain credentials (as opposed, perhaps, to credentials themselves) that creates certain classical impediments to the creation of wealth.
Thus, credentials might be thought of as "necessary but insufficient" for achieving certain goals or validating trust. But even in the above example, the credential was not necessary for the production of "good" software. AI simply exposes this truth, since it gives the average person more direct access to skills and knowledge: the (involuntary) credential was never necessary for doing some of these things, and in some cases we can remove the necessity entirely.
It sounds like you have a particular bone to pick though you're only doing it by talking in generalities in the OP, and now you're talking about programming credentials.
I don't know of any required credential to write software or be a programmer.
When I think about credentials, I think about doctors and lawyers. In both cases, I'm going to demand that the people I work with are credentialed, and there is no way that I'm going to change that.
Can you give a specific example of something that requires a credential today that you would like to see relaxed?
It's mostly a problem with "requiring" the credentials by law
I have no problem with you wanting your doctors or lawyers to have credentials for your own needs
I take issue with people requiring me to seek credentialed doctors or lawyers, as that sets up a compulsory system which reduces the quality and quantity of law and medicine produced, and which has predictably created things like today's doctor shortages (see news stories on doctor shortages).
So AI will fill some of these shortages, if burdensome regulations (including some that already exist) don't prevent it from doing so.
I haven't been doing this, but I have been trying to turn the lights down near bedtime and also to wind down more before bed; otherwise it feels like I didn't sleep as restfully.
ehhh, for a lot of traditional Catholics, neither Thiel nor Rome is currently Catholic, so I think there would be disagreement with both sides here
I thought Thiel's argument was that the anti-AI crowd might tend toward a pagan primitivism (mentioning figures like Greta) and authoritarian measures to stamp out technology under an Anti-Christ leader, emphasizing base physical pleasure over technological "progress". I guess that's one possible "End Times" trajectory.
Catholicism isn't necessarily for or against (classically) liberal democracies, with the exception of specific configurations that might be condemned, afaik, in books like "Liberalism is a Sin" (liberalismisasin.com) or in writings against the "heresy of Americanism".
p. 11 says, in contrast to a top comment here that claims there is no singular Anti-Christ figure: "the Sacred Scriptures speak of Antichrist in various places as being a particular person or individual."
Rome has been thought by some traditionalists to have fallen to modernism with the Vatican II changes, which in their view sets it up for accepting or bringing about the rise of an Anti-Christ movement.