As long as AI is controlled by Silicon Valley, it will never become culturally aware, because the people controlling it are not the slightest bit culturally aware. They think diversity is about skin colour, and that's about as far as they are willing to think about it, because they are intellectually lazy and have dead souls, without any curiosity about the world.
They train their AI on some false assumptions that fit their political faith, and foreigners who need to use their AIs will have to learn about the biases and work around them. Just like it is today.
He was great fun. He also had amusing tales of his "tech character's" adventures dating etc. Back in the '90s, when we were all young, his weekly column was a highlight for me.
I struggled to get through the culturally aware section. Apparently the technology should define its responses in terms of whatever identity group the user has been lumped into. For some reason, if it doesn't do this, it won't have the same reach.
I remember some visions of the future from Microsoft in the early 2000s. They even demoed some cool gear with the Kinect, and I managed to try out the table at their campus.
Now I speak to other SWEs in tech, from semis to hyperscalers, and it feels so bleak. All ethereal bollocks with crypto and AI. All designed to make someone else money. Where is the cool wearable, IoT, actual human-interaction tech?
We should be having magic mirrors, that style of coffee table... Heck, I would settle for smarter green tech.
Everything honestly seemed more fun back then, from the state of the web to gaming.
I have never been so bored with the state of tech.
I kind of want tech to become invisible and just do things without my explicit order. So AI progress would be exciting. But it feels like 95% is gimmicky. Which is fair: people are trying to find a way to extract real value from tech advancements.
Funny. I want less invisible tech doing strange things in subtle ways, and I want things to do what I tell them, on my orders, and do so rapidly. I'm sick of invisible updates, invisible data collection, etc.
I want tech to figure out what I want and do that invisibly. I don't want all the data collection. I also don't want to have to tell the clothes washer to do laundry; I want it to detect clothes on my floor and return them to the correct place, clean. Note that this needs to be correct: it needs to understand the difference between clothes I'm going to wear again and clothes that I want washed.
Similarly, the kitchen robot should figure out a healthy meal we will enjoy, get it on the table for our dinner and then clear the table when we are done eating, putting the leftovers away and cleaning up everything else.
The above is what slaves could do 150 years ago. We have lost much - it is worth not having slaves in society, but if you would have had a slave (or servant) 150 years ago, your life is in some ways worse for that progress, and I want it back. I also want this affordable for everyone.
>The above is what slaves could do 150 years ago. We have lost much - it is worth not having slaves in society, but if you would have had a slave (or servant) 150 years ago, your life is in some ways worse for that progress, and I want it back. I also want this affordable for everyone.
I'd recommend avoiding references to "slavery" in this manner... It really just comes across wrong to anyone who knows what real slavery is like... Polarising language.
> It really just comes across wrong to anyone who knows what real slavery is like...
Let’s be fair, nobody here likely knows what slavery is first-hand. And even if 0.01% knows, so what? It’s an important debate. We’re talking about building robots with human-level intelligence. It’s worth discussing whether they’re going to be slaves.
"real slavery" covered a lot of different ground. The vast majority of slaves were hard labor - farm and mines being the largest use of slave labor. Only a small minority were "house servants" of the type I described, but that minority did exist.
Have you ever had ANYTHING break with Google? Have you ever tried to get it fixed? Imagine your car just stops going into drive "because AI" with absolutely no way to fix it yourself.
I think what's funny about these videos is they aren't exactly dependent on advances in computing, but rather on display technology and network bandwidth. I've been annoying people for years by pointing out software is often the easiest part of all of this, and anyone doubting that should attempt to build a physical device, let alone a state of the art one.
Really the canary in this particular coal mine was when Silicon Graphics hit the wall in the late 90s. They were the epitome of pushing this dream that with visual computing everyone will be able to leverage their own intelligence that much more, leading to a better world, and it just doesn't seem to happen - we all just use it to distract each other instead. In the world today you have SGI level graphics in everything, and yet the applications of it are so mundane. For example, you don't see people doing finance visualizations in immersive 3D, we don't see people using VR for Minority Report style interfaces even though they are now viable, and even the Mac Finder is no longer spatial. Ultimately I think our culture has lost hope in the whole idea of visual intelligence.
May be true, but that coffee table is still something I want.
Given it existed 16 years ago, why did the concept just get culled? We are now just left with boring coffee tables, photo sharing, music selection, movie picking, event planning, all stuck in the same model for the last 15 years.
Where are my smart glasses and HUDs?
We have all gone back to being excited over talking with a chatbot via text. Personally, I would rather have some mates over, having fun with the above table.
US industrial policy (IP;) is optimized for pharma-shaped industry, so that's what you get? Apple is investing years of nifty R&D to create an extremely difficult mass consumer market HUD, and far easier "niche" markets will be addressed afterwards, not before. Because patents. Maybe sufficiently long-term that's a win. Maybe not.
Absent mandatory FRAND licensing, patents don't play well with many-player small-market exploratory industrialization/commercialization. Pre-boom VR and large touch surfaces used the DIY dodge, "we're selling a kit, not a thing, so violations are on you, and of course you can't sell what you build". So you might then have bought a kit for a crufty coffee table surface, and adapted code from a we-cant-be-commercial small and scattered but active FOSS ecosystem. It all rather crimps market gestation. My understanding is the current trade war kicked off because pharma insisted on an "everything pharma wants, now" US position, and China considered that an intolerable new 'unequal treaty' (opium wars context).
Importation barriers don't help. And people trading incremental collective progress for moats and unicorn dreams. And just small-population "why was X never done?". As you accumulate market inefficiencies, eventually it all just doesn't fly? But mostly patents it seems to me.
I'm actually really happy with the progress made with wearables.
Smart watches are great, and the Humane pin is cool (massively overpriced, considering you could probably give these out to anyone who prepays for a year of service).
I imagine the next great step will be AI-powered pseudocode.
> I have never been so bored with the state of tech.
First, I agree that crypto and AI are largely VC nonsense. Not so much making someone else money as consuming it while dominating the conversation. Maybe AI will put customer support people out of a job. Congratulations VCs, that's what you were put on the planet for. Bravo, well done.
Otherwise, maybe tech is too good.
Wearables? Apple Watch and iPhone. I'm very happy with the MacBook Air that I'm typing this on. I'm quite happy with my EVs. When I need to see a doc, I send a message on MyChart. Chat with a friend, WhatsApp. When I need to look something up chances are pretty high that there's something on Wikipedia. When I want a book, I check it out online and pick it up at the library. Or I'll buy it on Alibris or Amazon. A nice late edition copy of my college calculus book cost $5 + shipping. I can travel anywhere and have a map and be able to pay for something with my credit card. When I want a quick tutorial and drills on Kyrie Irving's moves, it's on YouTube. Chess? I can play programs that would destroy Bobby Fischer.
As for systems and infra, I'm very happy with LLVM, Linux, FreeBSD, homebrew, .... Readily available consumer systems are infinitely fast compared to what we grew up with. No Apple IIgs for me, thank you Boomer.
My complaint? No late night haunts because everyone is online.
I'll grant that what isn't there is the sense of the new. But what is there is really, really good.
>I have never been so bored with the state of tech.
When a significant portion of the world's best-funded tech companies (and now an increasing number of other companies for which digital technology is more peripheral) devote significant parts of their brains and money to leeching as much private data as they can from people for the sake of advertising random garbage to them, or to locking people into some bullshit silo or another, it's not surprising if much of the web stagnates. Even when new things that look good come along, you often quickly see how full of hooks, suckers and catches they are for secretly extracting something from your life and activity.
I already use an Apple Watch Ultra, and it has improved my life in many ways. Again, though, everything feels so limited compared to the possibilities.
I have some ideas for social interaction apps that I am hoping to build; it's just that the current focus on health tracking seems limiting to me.
At a gig, why can't I use my watch to find my mate? Why can't I play corporate wide pandemic zombie survival with my watch? Why can't my house auto-detect where I am and adjust thermostats accordingly? Why can't I unlock and open my house with my watch? Why can't I use my watch NFC to bump myself in at work?
I deeply wish that the Apple Watch were a general computing device.
I want maps (not directions, maps, dammit), I want direct sensor readouts, I want developers to be able to build whatever it is capable of without having to color inside of Apple's lines.
However, I suspect I'll see all that around the same time I get command-line access on an iPad.
> it's just that the current focus on health tracking seems limiting to me.
Anecdotally, my peer group uses it every day to check how their last day was and uses that info to regulate their activities today, then discusses it with their peers every day.
Meta is working on some of that stuff. But in the current economic era of profit over innovation / growth, those sentiments you miss are going to be dormant for a while.
The answer to that is right in your own comment. No one wanted that cumbersome demo ware crap. It had no utility and no value other than visual bling.
The reality is people want to live their lives freely, not tied to technology. This is something Apple understood under Ive so very well until Cook drove him out. Now I expect we will see Apple's first failure with Vision as HCI will demand a less constricting interface.
AI is likely that freeing engagement paradigm, and Ive is jumping onboard there.
When I was younger I would have framed this negatively: that Apple hides the technology too much. But now I agree with you and think Ive and Jobs got it right at the product level. Tech should do something for you, and ideally you should not realize you are using technology.
Imagine you are a carpenter, and you build houses. You have only ever used a hammer and nails for nailing. One day you get a super fancy nail gun where you adjust the pressure and tweak a million settings, and it has a laser sight and cloud connectivity etc.; you would probably hate it. But if you get a nail gun that is idiot-proof and works well with no tweaks, you would love it.
Technology exists to solve problems, not for its own sake.
Yes, I think the difference is something like this: classic Apple viewed computers as things within a lived-in external environment, while classic Microsoft (Gates, at least) viewed the physical world as merely an inconvenience getting in the way of your connection with the metaverse, for want of a better term.
The big balance to make with ambient computing is to ensure the humans using it are the entities that have agency, and they don't merely give it up to the system. Arguably things like Tiktok, Facebook etc. demonstrate people already have.
There was a project in Japan called TRON, at least superficially a sort of academic/industrial operating-system research effort. The leader used to write introductions to their annual conferences, and one of these contains a rant about why virtual reality is misguided and the internet of things is the way to go, and this is from around 1995.
> The reality is people want to live their lives freely, not tied to technology.
Considering everyone and their dog has a phone, the incredible rise of digital payments, more and more time being spent online, more people globally getting online, a more expansive internet every year, cars and their media systems, electric cars, all devices connected to the internet and so many other things I'm missing, I would qualify this statement as false.
While culture definitely touches everything and is a lens through which we look at the world and each other, there are some things that culture cannot bend to its will. I'm not arguing that there is an objective reality that we can access, but there are things in the universe that culture cannot make a dent in.
I had a conversation just this morning with Bard where it provided me with a cultural slant that is also, shall we say, politically influenced, but in this particular instance I just wanted the raw facts without taking people's feelings into account. It felt like it was lecturing me, which is very, very off-putting.
>>I'm not arguing that there is an objective reality that we can access,
Isn't there? What would the light coming through James Webb be, then? Acceleration due to gravity? The non-repeating nature of Pi? Seems to me the universe is filled with objective realities, and it's our deteriorating social fabric that makes people cast doubt on that fact.
Oh, I am not skeptical of objective reality. The operative phrase is the second part, viz. ".... that we can access." Also don't interpret this as giving up, but rather that there are boundaries to what is accessible - while we can push the horizon further and further back, we can never be fully "outside" the system.
Is even a single one of these concrete enough to be verifiable? Is this what passes for hypotheses in our business?
If you make predictions, please make them concrete enough (actual numbers!) that we can tell in hindsight whether you were right or wrong. Without being able to judge someone's prediction track record, it's all useless for informing any real decision.
But he is the CTO of Amazon. All he can do is signal Amazon's positioning and try to advance Amazon's agenda. He can't publish any real predictions. For all we know, he may actually think the complete opposite of what is published.
I do appreciate that he linked previous years' predictions at the bottom. The predictions are still too vague to be falsifiable even with hindsight, but it's a start.
Well, he promoted remote learning in 2021 to "earn its place" in schools.
Reality: absolutely everywhere, remote learning has been a shit-show which no one, not the parents, nor the kids, nor the teachers, wants to ever repeat.
All other predictions in 2021 were lame and obvious continuations of already-old trends. Easy to get those at least not totally wrong. The "remote learning" one was new, clearly influenced by the pandemic, and thus a risky one. And he pulled an epic fail on that one.
Internet comments are always so polarized... Some people liked remote learning, some didn't. I know some college profs who were glad to stay home in -20C weather. Others hated it even then.
It's true that remote teaching didn't take over the world, but I doubt people's appreciation of it had anything to do with it. It just wasn't a long enough event to change anything permanently.
Yeah, sure, if we had been forced to continue remote teaching for a few more years, we'd probably still do it. But that's not exactly the definition of "earning its place": doing something just because the alternative you'd rather do is not possible.
Also, it is not just about taste. Practically all scientific work investigating the large-scale home-schooling efforts concluded that it was far less effective than normal schooling by practically every measure, including the tangential purposes of schools besides teaching, such as providing social context and the stabilization and mitigation of socioeconomic differences.
I'm expecting a sobering year for AI in 2024, the result of marketing gimmicks, closed training data and weights, false promises, chaos in recruiting, more tech layoffs due to the economy (and false hopes for AI-powered workforce replacement), demotivation in the young workforce, etc.
Another reality check for AI implementation will be the cost of generative AI API services and the environmental impact of training the models.
But I am not a Doctor and CTO of a big monopolist, and I don't have a stake in selling you AI cloud services. So who knows..
I expect this to go down the same way as with the internet in 1990s and 2000s. Lots of potential followed by hype, big bubble with lots of bad ideas, crash, good ideas survive, wide adoption and then AI will be part of everybody's life.
Sorry to break it to you, but AI is already part of your life. It might not be a chatty one, but it is there, invisible. It really gets me when people think that before LLMs there was no AI, just because they expect a stereotypical cyber butler/assistant (whether physical or virtual). I feel like AI is going to be like flight, in the sense that what we thought human flight would be like is nowhere close to reality.
On the topic of AI, you've brought up some valid points, and I understand the skepticism, especially given the AI hype cycle. However, there's also plenty to get excited about beyond the marketing gimmicks.
Consider the recent Google demo. It was recreated and, while still fun and impressive, it's definitely not ready for commercial use. You can see it at https://sagittarius.greg.technology. This demo is a preview of what's to come, and I think it's important for us to stay open-minded.
I believe that in 2024, we're going to witness a significant expansion in what's possible with AI, while costs decrease. Increased contributions in the open-source community will likely fuel another wave of startups and new applications.
My current reference class for AI today is 3D games in the '90s.
Doom has just turned 30; when it was introduced, it blew people's minds with the quality, and fuelled a moral panic about it being a "murder simulator". Every year or so after that, a new game would come out, be hailed by the press as "photorealistic", and then forgotten as the next in the cycle replaced it… but even then, they're called "Doom clones", and the original doesn't disappear.
I even remember one of the magazines bemoaning that Riven was pre-rendered rather than real-time "given Quake proves the PowerPC chip can handle polygon-based gaming" (or something close to that quote).
I think we're in a similar phase with AI: ChatGPT-3.5 blew people's minds with the quality, and fuelled a moral panic about it being a fully automated plagiarism/cheating engine, we keep getting news stories about new models or ways to get much more out of any model, which are then forgotten as the next hot thing repeats the cycle… but even then, they're called "ChatGPT clones", and the original doesn't disappear.
It's only an analogy; I don't know how long we have to wait before AI becomes an almost unnoticeable enhancement of everything in the kind of way CGI did.
That Corey Quinn article is great, thanks for sharing. It's a more thorough mirror of my own experience with Q. After seeing the obnoxious popups all over the AWS docs I tested it out with a few simple questions about AWS services.
My main takeaways were:
1. It refuses to answer a lot of simple questions.
2. It provides a lot of factually incorrect answers.
I didn't even try any complicated or subtle questions; it failed at simple factual asks.
So many companies are rolling out obviously half-baked and useless GenAI features that I would be embarrassed to release. All of these terrible user experiences must be damaging some companies' reputations. The only justification I can see is that many large corporations care more about being able to tell investors and journalists that they have checked the box on having the latest shiny trend than they do about building actually useful tools and user experiences.
I expect a sobering year for illustrators [1], animators [2], musicians [3], game designers, publishers, and news outlets [4] as GenAI starts to make their fields accessible to everyone.
The prevailing opinion on HN is to sleep on this like crypto, which is baffling to me. The outputs are so good and are leading to entirely new classes of products, not to mention orders of magnitude reduction in time and cost structures.
This is the most exciting moment in tech of the last 30 years, yet there's so much "nobody would use Dropbox" and "old man yells at cloud" negative forecasting.
Entire industries are going to be disrupted by this. Y'all are sleeping on it.
Comparing generative AI favorably to crypto is an odd choice, given that crypto and blockchain are the poster children for "massively overhyped tech that led to absolutely nothing."
I'm comparing HN attitudes. People dismiss AI as if it were crypto.
> that led to absolutely nothing.
This is the part most of you are completely blind to. It's astonishing to me that you don't see the step function changes happening. I guess that means less competition, though.
By the end of the year, anyone will be able to make brand new Taylor Swift music that sounds good. The average child will be able to animate short films.
This is insane. Both the fact that sci-fi possibilities are tangibly right ahead of us, and the fact that so many people are dismissing it outright.
You know you can already make Taylor Swift music, right? YouTube has plenty of tutorials and the web has plenty of DAWs you can use that require no training you can't get on the internet. The sounding "good" bit has nothing to do with technology, imo. If you have no skill, you won't be able to make good-sounding Taylor Swift music, but you will be able to have another entity make it for you. You don't need AI for this. You could contract the services of someone to do it; you could ask the Swiftie communities to do it. You don't need AI for this.
The only difference is that you will now be able to press a button and have something that resembles Swift music, but I highly doubt you will be able to steer the output through the multidimensional output space towards something that falls within "good-sounding Swift music" if you yourself have zero musical knowledge. You could still end up there randomly, though. But what are the chances? Fairly low, I would say.
Short films are the same.
You are looking at music and films the same way a random biz manager looks at programmers when he says "I will give you a magical glove that enables you to type faster so you can complete your work much faster".
What I think this tech will be good at is prototyping for people who already have the skills or knowledge of what the final product should look like.
The problem is that while those things might be true, their impact might be quite limited. Will all the kids do their own short films? Will people stop listening to Taylor Swift?
More "trinkets and gadgets" is a continuation of the last decade, the real question is if things remain incremental (maybe up to a percent more GDP growth) or become truly transformational. Claims to the later do invite scrutiny.
> By the end of the year, anyone will be able to make brand new Taylor Swift music that sounds good. The average child will be able to animate short films.
But will I be able to listen to that music and watch those films in my fully autonomous vehicle which will be available in 2016... I mean 2020... I mean 2025... I mean 2040?
> But will I be able to listen to that music and watch those films in my fully autonomous vehicle which will be available in 2016... I mean 2020... I mean 2025... I mean 2040?
Despite Waymo's current state, I don't think autonomous cars will be solved or deployed widely anytime soon.
I do think Hollywood is about to be toast. Movies, music, games (non-engine, non-gameplay aspects), narrative fiction, etc.
These techniques will impact different fields in different ways. Some will be outright washed over and remade entirely.
You're missing the point of comparison -- fully autonomous vehicles have been a year away for 15 years now. Much like generative AI will be a year away from making artists obsolete for the next 15 years.
Forget replacing Hollywood and the game industry. Show me one hit movie or game that was generated by AI. Show me a hit song that was generated by AI. Show me a widely beloved novel, or even a short story, that was written by an LLM. A single one! Don't tell me the entire industry is going to die off at some indeterminate point in the future if you can't show a single example!
Until GenAI is good enough, there will be zero examples.
But the moment it becomes good enough, it's game over for everyone in that field of work — at current gpt-4-1106-preview prices, it's cheaper to generate a complete new novel from scratch than to buy a single copy of a paperback in a closing-down sale.
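As a rough back-of-the-envelope (all numbers here are my own assumptions, not quoted prices: ~1.3 tokens per English word, and roughly $0.03 per 1K output tokens for gpt-4-1106-preview):

    # Rough cost sketch; every figure below is an assumption for illustration.
    words = 100_000                     # length of a typical novel
    tokens = words * 1.3                # ~1.3 tokens per English word
    cost = tokens / 1_000 * 0.03        # assumed ~$0.03 per 1K output tokens
    print(f"~${cost:.2f} to generate")  # roughly $4, versus a few dollars for a clearance paperback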
There's no intermediate state where the AI and the humans are both competitive on the marketplace, just an adjustment period where businesses and employees don't yet realise they're redundant. And an endless tide of mediocrity overwhelming the publishers until then because the cost of creation is so low.
And that transition… is kinda already happening with image generation, where the AI temporarily wins awards until the awarding body decides AI aren't allowed to.
The impression I have is that artists who like their role turning into "editor" seem to be fine with this, the ones who actually enjoyed coming up with novel art from scratch seem to resent it.
(It's entirely possible it will take ages to get good enough in other domains, but the evidence you're asking for will only come when it's too late to matter).
> But the moment it becomes good enough, it's game over for everyone in that field of work
HN really fails to see this. The writing is on the wall. But HN keeps writing about their techno-utopia fantasies, very reminiscent of the comic strip about the nerd who thought that having super-strong encryption would somehow keep his data totally safe (hint: it didn't). The reality is likely to be that most people will be unemployed and then re-employed into low-value, min-wage, soul-destroying dead-end jobs, while all the profits from AI go to the people who financed the development of AI and then used dirty tactics to drive small companies from the market (see the AI 1%ers).
There was a story a while back about a society where ever more jobs are done by AI, eventually including the police. In such a situation (and in the story) it would be cheap enough to keep all the unemployed people housed and fed with all that automation; in the story the people have no freedom, the robots keep them from going too far from their free apartment. That could totally happen, even in our various current governments in the real world, leaders can be very divorced from the lives of those at the bottom — "let them eat cake" may not have really come from Marie Antoinette, but the cliché exists for a reason.
I didn't like the twist for the final chapter where the lead character was rescued by a Deus Ex Machina in the form of winning a surprise two-person ticket to a fully automated UBI trust fund based in Australia, but my problem was with the tonal shift of the story rather than plausibility given the prior world building. Likewise in reality, while the cliché "a rising tide lifts all boats" is often wrong because of the metaphorical equivalent of many not being maintained in a seaworthy state, technological advances do change the lives of even the poorest. My 18 month old iPhone SE can run some Stable Diffusion models locally, in addition to being a near-real-time audio and visual translator, a route finder, and having some limited healthcare functions. The wealth will concentrate unless governments take special care to prevent that, but it takes active malice (which can and does happen) to prevent many of the benefits from reaching even the worst-off.
The twist was there to show an alternative future to the techno-dystopia. I think the author now believes that we’re merely waiting for the latent effects of climate change to destroy civilization, so I doubt he’s worried about an AI takeover these days.
The "AI" Drake/The Weeknd song was not created by AI -- it used AI voice synthesis to recreate Drake/The Weeknd's vocals on top of a human-created song. If you're considering that to be AI-generated, I would point you to the many songs using vocals by Hatsune Miku which have been made for the past 15+ years to say that this is nothing new.
The rest of the stuff you've created is meme content, which while funny, is not a serious threat to Hollywood.
> anyone will be able to make a brand new Taylor Swift
I was with you until this. I think music is actually informative for what we can expect in visual arts and generative models generally; synths, drum machines, and music production equipment and software share very similar qualities with this new stuff, yet we still have rockstars like Swift.
Vocals are solved [1] and the workflows are getting better and better [2].
AI influencers and celebs are getting big [3], not to mention VTubers such as CodeMiko [4] and Hololive [5] are huge and increasingly AI-powered.
People are absolutely willing to be addicted to following virtual personalities and artists. Gorillaz predate all of this AI stuff and they're pretty big.
I know of all that, hell I was listening to Plastic Beach when it came out. My point is that EDM and modern Pop coexist with older forms of music. I don't think new ways of creating art are zero-sum, they create new demand.
But we don't need more Taylor Swift music. We especially don't need more "I can't believe it's not Taylor Swift" music which is more realistically what the AIs will get you.
If you could make (metaphorically) more Taylor Swifts that might be interesting. But it doesn't do that.
> I expect a sobering year for illustrators [1], animators [2], musicians [3], game designers, publishers, and news outlets [4] as GenAI starts to make their fields accessible to everyone.
That's such a privileged and insensitive thing to say imo. The careers of all these people are at risk, and no, this is no industrial revolution, there is no replacement for those people unless they decide to become LLM researchers/devs and join the LLM train.
I wonder if you would say the same thing if your job were made redundant because it's not needed anymore and you needed to spend 5+ years retraining without having the funds or economic support to do so.
I don't think HN is sleeping on generative AI - it's been fairly prevalent near the top of the front page.
I agree that some creative fields are going to start feeling real impacts, not necessarily because models have gotten super creative, but they're becoming more practical for rote tasks which otherwise might "pay the bills." For example, Google's recent work on style alignment: https://style-aligned-gen.github.io/ .
I used a similar workflow in SD with ControlNet/depth mapping over the summer.
The key factor will be creating a distinctive style.
You will need an experienced artist for this; that's a limitation of the dataset. With this technology prevalent, designers and artists will stop sharing their work publicly for fear of AI giants stealing it.
I expect serious legislation in place for generative visual AI. Copyright, etc.
When I first learned about Bitcoin, I thought "this is interesting, but the person telling me this is using a 'numbers go up' graph to try to convince me to buy. They specifically, regardless of the technology, smell like scammers".
(That was at $250/BTC, so of course I have a certain level of regret from that which may cloud my thinking…)
AI? I've been interested in it since the late 90s when I first learned that ANNs were an actual thing one could program and which could learn for themselves, but that just means I'm disappointed by seeing obvious scammers and grifters trying to make a quick buck from what I regard as a legit tech — and because I can see the bad actors, I can sympathise with people who want to sit it out, even though I won't.
Sorry, "shamed" is a bit hyperbolic. But I was more or less told on one of the sysadmin subreddits that there is no pride to be taken in work accomplished with the assistance of LLM's, even if that work is good.
Since when are accessible tools a prerequisite for quality? Everybody can buy a pen or brushes.
As an artist, designer and developer, I am more aware of generative AI than the average "prompter", whom the startups are using for fine-tuning of the models and who pays a hefty price for the "privilege".
A good illustrator with AI and knowledge of composition, color, anatomy and real drawing skills will kick the B out of "everyone" who is prompting aspect ratios and the names of artists whose work is represented/stolen in the data set.
The big names in the business will jump to this "novelty" out of hype and cost optimization, only to realize that the results are limited by the dataset and the shiny average of styles actually cannot give serious brand differentiation.
But hey, keep prompting, sorry, creating. You are not a part of the biggest Dunning-Kruger experiment to date. You are an artist, designer, composer, programmer and business owner. Good Luck.:)
This is, in my humble opinion, a view based in fear, jealousy, and/or gatekeeping.
Gating on accessibility ignores so many rough edges people experience. Opportunity cost, activation energy, learning gradients, community. We should want for people to encounter fewer stumbling blocks. Life is already complicated enough.
Not everyone has the luxury of being able to spend 10,000 hours to manipulate a graphite tip on paper. Everyone has a human experience. Everyone lives a life of joy and pain. Everyone daydreams. And everyone can communicate feelings to others. Art is just one playful encoding of that. A specialized encoding with difficult runes that take time to learn.
Asking someone to stick to pen and brushes is like trying to keep reading as a priestly Church ceremony.
Making art should be as easy as playing a video game or telling a story to friends. It doesn't matter how you made it. It only matters how it made someone feel.
More than anything, I think the artist will be the biggest beneficiary of these tools. They'll be able to do more with a smaller budget and fewer people. They'll be able to learn and experiment faster, and take their work down entirely new paths that were previously blocked.
Webcomic and fanfiction authors will be film directors. Just give it time.
Think about it this way. Would you be happy to take a 40% hit on your salary forever if it meant that your career was more accessible to everyone? Would you be happy to live in a Tokyo-style tiny flat if it meant that homelessness in your city was resolved?
Comparing a Formula One driver to a passenger in an automated car that is driving on an F1 track?
Working on your craft and learning through practice is "a luxury"? Seriously?
Art is encoding? People, start reading books. Your mind is trapped in a predictive and programmed pattern of consumerism and wishful thinking.
When your tool is painting/composing/writing instead of you, you are part of the process, not the master of the process. But go ahead, slip into the AI-induced coma. It will be fun.
For sure.:)
Soon you will be into this possible UX:
Your dream will be recorded and shared with the corporate AI.
Because art is a personal expression of emotion and meaning, and you are the ultimate AI artist, it is logical that you want validation of your own beliefs more than watching others' experiences.
So you will put on your VR headset, think about entertainment, and on the fly the corporate AI will generate a compelling story with music and visuals to your taste and liking. After all, everything is encoding. And people before you produced every possible scenario. LLMs hallucinate, which is artistic creativity after all. Right?
Another AI will charge your Social Rating a "small" amount of "carbon credits" to motivate you more and give you participations points for watching advertisements and propaganda.
You think this is a joke? Think again. The tech is here, the users are ready for this ultimate experience. The VCs are willing to invest.
Sure, in the beginning, some will share their dreams just to have a social validation dopamine rush. But with time, the AI audience will be more user-friendly and will consider your deepest fears of rejection and complexity of your digital self.
So you will have it all. Without any pressure, work or 10,000 hours of pushing yourself towards a goal.
Just consume your dreams synthesized through the network and call it a day.
imo what is likely to happen in the short term is that the bottom end of the people in that field will see their salaries significantly reduced, as GenAI can do most of the work at a much cheaper price. It doesn't even have to be as good, especially when it doesn't need sleep, breaks or raises to perform to the same or a better standard. Some will accept the lower salary and suffer in silence; others will leave the field only to find many other fields experiencing the same phenomenon, and thus still end up in a low-paid role somewhere anyway; others will skill up to get away from the rising tide, which will increase the supply of skilled workers and bring their salaries down too, on top of the chunk of work GenAI has already eaten.
Summary: everyone apart from the 1%ers gets lower salaries.
I don't think one has to be either to see that the future will mean increased choice and speed because of AI. Codegen AI tools may not do everything a developer does, but I see them evolving to a point where they can "understand" the context of a large code base and generate useful snippets of code as per the desires of the product owner. Yeah, "development" as we know it may become obsolete, but that is probably two decades from now.
Today's answer is that the Federal Reserve has announced that they feel recently high interest rates have achieved their goals, and they even see a possibility of rate decreases.
That answers the first part, not the second. What force would push tech companies to accelerate their pace of innovation, necessitating a larger workforce?
I'm not seeing a surge of customers demanding new essential features on the horizon. We've gone through several bubbles with no deflation over the past few years.
I expect big companies can keep up their pace of innovation while continuing to lay off through efficiency/consolidation/depreciation efforts (which shareholders will love). Small tech companies will get closer to the norm of small businesses, needing an efficient and effective business strategy for funding - there will be fewer new players, each running tighter ships. Zombie start-ups will continue folding over several years, freeing up more labor.
What are the systemic signs pointing to a need for more labor within the tech industry?
Another probable bad side effect of AI will be that a generation of developers becomes dependent on Copilot/ChatGPT to be able to write code! They will definitely perform worse than in the pre-LLM era without the help of an LLM!
In a lot of ways, yeah. Autocomplete is an absolute game changer and one of the biggest reasons that static typing came back in vogue and crushed dynamic languages. Massive productivity gains when you can ctrl+space and see all of the methods available on a variable, or use a hotkey to figure out if the method you want to call is `do_foo_bar` or `do_foo_and_bar`.
I'm not sure I buy that argument about dynamic languages. You are probably right in that autocomplete played a role, but for me, working on a pure Node.js app back in 2014-ish, it was a nightmare of runtime issues that type checking would have eliminated. The productivity gain in a typed language was not having to write unit tests for things that static typing does for free.
I agree with you, autocomplete definitely isn't the only reason for static typing making a comeback and dynamic language popularity dying off (and the big legacy names like Javascript and Python have Typescript and mypy/pyright to transform them into a facsimile of one). I think a lot of it has to do with maturity, where a lot of "move fast and break things" devs who loved untyped Node/PHP/Python got burned by bugs and impossible to comprehend legacy codebases and slowly realized the value of static typing. But that type of understanding takes years. A novice programmer can see the value of autocomplete and not having to check the docs or stackoverflow for the name of a method pretty much instantly.
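For a minimal sketch of the kind of bug being described, here is a made-up example in Python with type hints; mypy or pyright flags the unguarded access at check time, where an untyped codebase would only crash at runtime:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class User:
        name: str
        email: Optional[str]  # the email may simply be missing

    def send_welcome(user: User) -> str:
        # Without the None check, mypy/pyright reject this function;
        # in an untyped codebase it's a latent runtime crash instead.
        if user.email is None:
            return f"skipped {user.name}: no email on file"
        return f"welcome mail queued for {user.email.lower()}"

    print(send_welcome(User(name="Ada", email=None)))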
autocomplete accurately took care of a lot of the boilerplate; that low-hanging fruit has been picked. AI is not doing the same IME, the pay-off has been a lot slower coming, and sometimes it's making the work more painful.
I do agree. It's hard to get an unbiased assessment of how many developers are actually using generative AI (copilot/chatgpt) for work, and in what capacity they do use them. Anecdotally, at my org of ~40 devs, we encouraged everyone to try out Copilot and let us know if they wanted a full license; only 2 people took up the offer, and they use it either for generating unit tests or translating English data logic to pandas syntax (which it does seem quite good at!)
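For a concrete sense of the "English data logic to pandas" translation, here is a toy sketch (the data and column names are invented for illustration):

    import pandas as pd

    # Toy data standing in for real order records.
    orders = pd.DataFrame({
        "customer": ["a", "a", "b", "b"],
        "month": ["2023-11", "2023-12", "2023-11", "2023-12"],
        "amount": [10.0, 30.0, 5.0, 15.0],
    })

    # "average order amount per customer per month" -> one groupby/agg expression
    avg = orders.groupby(["customer", "month"])["amount"].mean().reset_index()
    print(avg)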
This argument rears its head every time the state of the art in developer tools makes a credible threat to improve.
And yet relatively few developers use C, assembly, or machine language as their daily drivers. I've even met accomplished developers who don't know how to write a compiler!
It's fine for developers to become dependent on tooling that makes them more efficient. This is a good thing, and we want more of this!
I think it's a slightly different discussion. I 100% agree with using any tools, including LLMs, but LLMs may have a bad effect on the ecosystem.
New programming languages, syntax and APIs don't get LLM support until the models have learned them (or are given them as context). And perhaps new things get used less because LLMs don't use them. Less usage in the wild means LLMs learn less from it. I hope this doesn't stop programming (and other things) from evolving.
You're going further to "whether LLMs can be useful when programming in a new programming language." Which I will leave to developers using those new languages.
I'm saying more that every time developers develop or find a tool that makes them more productive, if it doesn't look like the Old Ways, there will be other developers to crap on them and say they don't know enough about the Old Ways. Which may or may not be true, but isn't always relevant given the size of the space that comprises modern programming.
This is why there are still people who deride programmers who use IDEs instead of using tools that were invented nearly 50(!) years ago. I'm not saying either way is a fit for every use, but the attitude that developers should eschew attempts to improve the state of our tools is toxic.
That's probably what everyone said when auto-complete was invented and IDEs were used. The actual code written in most cases isn't so important, the architecture, experience in building systems and debugging skills across a whole stack are what counts.
But when would they not have the help of an LLM? We are all dependent on the tools we use. That's not a bad thing.
The biggest risk is that it perhaps becomes more difficult to sift out bad developers from good ones, because LLMs let bad developers "cheat" more easily. But that's not a new problem, just a new era of the same problem.
If the best coding LLMs end up monopolized as much as most areas of the tech industry it will be very easy to lose access. Take mobile app development for example. If you piss off the wrong person or algorithm at Apple or Google then poof, half the market will evaporate for you.
Corporate LLMs have and will monitor every single prompt entered and if the company doesn't like you for any reason, related or unrelated to your coding they will ban your access with no recourse.
If open source LLMs are able to compete with the state of the art long term then I'm a lot more optimistic, but I'm skeptical they will be able to keep pace given the immense technical, financial, and lobbying resources the big players are certain to invest in the coming years.
> Unburdened by the undifferentiated heavy lifting of tasks like upgrading Java versions, developers can focus on the creative work that drives innovation.
I totally agree that AI is going to make it easier and easier for engineers to get closer to the customer and work on real problems. The number of specialist "IT" people, or separate backend and frontend classifications, will decrease and the startup and product world will greatly benefit.
Ok literally none of his predictions have any merit. They are absolutely bullshit. My bullshit detector has not gone higher this month. No wonder he is the CTO of Amazon. All of these companies are on their downhill slide.
Yeah, there is always the high-level view of these things and the low-level.
The way LLMs make stuff up is oft-remarked upon, and it's unclear whether or not it's a dealbreaker, but it might be for any/all of this to come to pass.
Here's a Werner/AWS-specific LLM anecdote along those lines:
I went onto AWS's LLM Q for the first time today, via a little chat icon nicely/helpfully linked from their documentation pages (when logged into my AWS account), and asked it a question to which I was unable to find an answer in their docs or via Google/Bing/StackOverflow after 5-10 minutes of searching.
What Redshift SQL query can I write to tell whether the instance I am connected to is a Redshift Serverless or a Redshift Provisioned cluster?
(I have migrated Redshift clusters in Dev/Nonprod to Serverless to reduce cost, but the feature set is not identical, so some of our app code is sensitive to the differences and needs to detect which type of environment is present before executing. The release number, but not the instance type, can be found in Redshift via SELECT VERSION();.)
AWS Q LLM replies:
To check if an Amazon Redshift cluster is provisioned or serverless, you can run the following SQL query:
SELECT cluster_type FROM svv_cluster_info;
The cluster_type will return either "provisioned" or "serverless".
You can only connect to and query Serverless clusters using the Amazon Redshift Query Editor v2. Provisioned clusters can be queried using either Query Editor v1 or v2.
[...+more text on the differences and a bulleted list of 3 weblinks/sources provided...]
Wow! Awesome! This answered my question perfectly! The response and code is formatted beautifully just like real documentation. There are a lot of SVV_ system views and I guess I just missed this one. I am at this point very impressed.
I go to actually try the above query on my serverless and provisioned clusters...
Serverless: SQL Error [42P01]: ERROR: relation "svv_cluster_info" does not exist
Provisioned: SQL Error [42P01]: ERROR: relation "svv_cluster_info" does not exist
Aww, man!
Another attempt answered similarly affirmatively, this time mentioning an imaginary "SVV_CLUSTER" view (which also didn't work.) So I don't think this is just some "unreleased future feature" leaked to the LLM via its training set of internal documentation (wouldn't that be nice! err, kinda...), since even the SVV_ view name isn't the same when you ask a similar question at different times.
Even as a chatbot, the most obvious use case, this is not really meeting the mark.
I know it's brand new and rushed out of the door for re:Invent two weeks ago and I know it'll get better... but it was a bit hard to read Werner's article a few hours later and take it seriously. My first Bayesian prior is now a "err, no".
Yes, earlier years were a bit better, but still. As CTO of Amazon he also needs to push the party line. These "predictions" are signaling, not actual predictions; privately he may think completely different things.
this all sounds great for rich people and like hell for everyone else. AI learns that brown people exist, women become data farms, clippy is your new boss, children are our future profits.
There is some kind of satire in here about how tech leaders and QAnon folks want the same things but take opposing routes to get there. Tech wants to say the N-word, get inside women’s private parts, devalue knowledge work, and indoctrinate children. But for liberalism…