
Side note, but I hate that we're moving to a world where coding costs a subscription. I fell in love with coding because I could take my dad's old Thinkpad, install Linux for free - fire up Emacs and start hacking without an internet connection.

We're truly building walls everywhere.



It doesn't though. You can still code the old-fashioned way, and you are even likely to become a better programmer for it.

Personally, I tried Copilot when I got it for free as a student, and it didn't make a difference. The reason I know is that I was coding on two devices, one with Copilot installed and one without, and I didn't care enough to install it on the latter through an entire semester.

It's just slightly better autocomplete, by a questionable standard of "better".


I agree with your overall point, but Cursor's autocomplete is significantly better than Copilot's.


Then don't use autocomplete.


Don’t worry.

There’s literally nothing an llm can write or tell you that you can’t write yourself or find in a manual somewhere.


> There’s literally nothing an llm can write or tell you that you can’t write yourself or find in a manual somewhere.

That's like saying, there's literally nothing a service business can do for you that you can't do yourself. It's only true in a theoretical sense, if neither time nor resources are a constraint.

In such a hypothetical universe, you don't need a dentist - you only need to spend 5+ years in medical school + whatever extra it takes to become proficient with the tools dentists use + whatever money it takes to buy that equipment. You also don't need accountants, lawyers, hairdressers, or construction companies. You can always learn this stuff and do it yourself better!

Truth is, time and attention are finite. Meanwhile, SOTA LLMs are cheap as dirt, they can do pretty much anything that involves text, and they do it at the level of a mediocre specialist - i.e. they're literally better than you at anything except the few things you happen to be experienced in. Not perfect, by no means error-free - just better than you. I feel this still hasn't sunk in for most people.


> That's like saying, there's literally nothing a service business can do for you that you can't do yourself.

No, it’s not. You’re making my statement abstract for the sake of arguing.

I’m not a cook, doctor, or a lawyer. I can’t prepare meals for a party of more than 2.

I can’t perform surgery.

I can’t effectively defend myself in a court of law.

I (and I assume OP) have programming expertise.

I can write exactly all code an llm could write.

For simple scripts, demos and other easily Googleable tasks, LLMs will be faster, but it’s nothing out of reach for me.

These tools won’t force you to pay a subscription to code. You don’t need them if you already have experience.


> No, it’s not. You’re making my statement abstract for the sake of arguing.

> I’m not a cook, doctor, or a lawyer. I can’t prepare meals for a party of more than 2.

They are demonstrating how over-broad your own statement was with an *equivalent* statement to show how it only passes on an unhelpful technicality.

Immediately after your quotation is this:

> you only need to spend 5+ years in medical school + whatever extra it takes to become proficient

LLMs pass the bar exam and the medical exam. These are things which I assume I would be able to do myself if only I were willing to dedicate 5 years of my life to each.

> I can write exactly all code an llm could write.

I can often see many errors in the code that ChatGPT produces. Within my domain, it's just a speed-up, a first draft I have to fix. Outside my domain, it knows what I *can't* Google because I've never heard the keyword that would allow me to.

On legal questions, ChatGPT (despite passing the bar exam) seems to make up cases. I believe this because I can Google the cases and fail to find them. Is this because they don't exist, or because they're not indexed on Google? I don't have the legal background necessary to know — and it would take me years to get the knowledge necessary to differentiate "it's worse than first glance" from "it's better than second glance".


Of course a memorization machine can pass exams that are mostly based on memorization.


Even if LLMs were "a memorization machine" (they're bad at that), the statement is obviously false, because actual literal memorization machines (books, video recorders, Google) cannot pass these exams.

LLMs only started to pass because they could follow the questions.

But even that aside, it doesn't matter why LLMs can do what they can do, or what else can also do it; what matters is that it would take most humans several years to reach the level of current LLMs in a subject they aren't already familiar with.


No. Just what, two years ago, no one was really using LLMs to do this sort of thing? Why do we suddenly have to use these tools now?

If your job is to write software, then you're the accountant or lawyer or doctor. Otherwise, what do we even bring to the table?


> they're literally better than you at anything except the few things you happen to be experienced in

Like speaking english and coding?


I don't know how you compare, but ChatGPT's English grammar and vocabulary are significantly better than mine. And when I prompt it appropriately, it also seems to be a better creative writer than I am, at least for short pieces.


> it also seems to be a better creative writer than I am, at least for short pieces

Don’t be so hard on yourself.

Chatgpt (and other LLMs) are awful at creative prose.


> Chatgpt (and other LLMs) are awful at creative prose.

As are most humans.

Don't get me wrong: what I've seen from even the better LLMs has a certain voice, and tropes, and a saccharine worldview that isn't dark enough where it needs to be for the story to work; but on the other hand, what I see on some fiction writing subreddits… the AI is often a genuine improvement over amateur writers, even in cases where the AI contradicts itself about plot elements.

Which is frustrating, because I have the feeling the novel I've been trying to finish writing for the last decade may be usurped by AI before I get my final draft.


> what I see on some fiction writing subreddits… the AI is often a genuine improvement over amateur writers

What point are you trying to make here? That amateur writers are amateurs? That AI is only "often" an improvement over an amateur?

> Which is frustrating, because I have the feeling the novel I've been trying to finish writing for the last decade may be usurped by AI before I get my final draft.

This statement shows such a warped attitude towards art and the creative process. What do you mean "usurped?" Do you actually believe that LLMs will overtake humans when it comes to creative works?

If so, you don't really understand what is compelling about the written word or what makes for good writing and reading and it's no wonder you feel as though your own writing is so substandard.

I highly doubt your writing is that bad. Especially if you've been working on it for a decade.


Just so we're clear: you're not sure what I'm saying, yet you think I'm a better writer than you believe I believe myself to be, you're confused by an imaginary "only" that wasn't in what I wrote, and you're skeptical of the idea that an LLM might overtake humans in creative works generally and writing in particular?

I'm not sure if the following statement will help your confusion, but most who judge the quality of a story do so without being able to write that story. Critiquing and writing are different skills.


Except it has no freaking idea what it's doing, that's the difference.


It's giving a reply that "sounds right to a human" in context. Which, on a small scale, is exactly what they, me, and you are also doing when writing (or speaking), except for the infrequent cases when we force ourselves to reason through stuff very. slowly.

(This is why I believe LLM performance is best judged against human inner voice/system 1 reasoning, not the entirety of human thinking. When thinking with system 1, people don't really have an idea what they're doing either - they're just doing stuff that feels right.)

Also note that "sounds right to a human" is literally the loss function on which LLMs are trained, so between heaps of training input and subsequent extensive RLHF, the process is by its very construction optimizing for above-human-average performance across the board.
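"Sounds right" is a gloss, of course: the base pre-training loss is next-token cross-entropy, and the "to a human" part enters later via RLHF. A toy NumPy sketch of that base objective, with made-up logits over a 4-token vocabulary:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy next-token objective: the loss is the negative log-probability
# the model assigned to the token that actually came next in the text.
def next_token_loss(logits, target):
    return -np.log(softmax(logits)[target])

logits = np.array([2.0, 0.5, -1.0, 0.1])  # made-up model scores
loss_good = next_token_loss(logits, 0)    # model favored the true token
loss_bad = next_token_loss(logits, 2)     # model disfavored it
```

Minimizing this across trillions of tokens is what pushes generations toward "what a human would plausibly write next".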


No, when I'm talking to someone I'm generally not randomly associating and dumping whatever I come up with; I typically have something I want to say.


Although, tbf, some libraries are documented better than others.

Also, local llms with an agentic tool can be a lot of fun to quickly prototype things. Quality can be hit or miss.

Hopefully the work trickles down to local models long-term.


And you think an llm can generate code to use an undocumented library? :D


Even documented libraries can be a struggle, especially if they are not particularly popular. I'm doing a project with WiFi/LoRa/MQTT on an ESP32. The WiFi code was fairly decent, but the MQTT and especially LoRa library code was nearly useless.


Sonnet 3.5 fails to generate basic Jetpack Compose library properties properly. Maybe if somebody tried really hard to scrape all the documentation and force-feed it, then it could work. But I don't know if there are examples of this. Like a general LLM, but with the complete Android/Kotlin documentation pushed into it to fix the synapses.


Of course, why wouldn't it? It's a generative model, not a lookup table. Show it the library headers, and it'll give you decent results.

Obviously, if the library or code using it weren't part of the training data, and you don't supply either in the context of your request, then it won't generate valid code for it. But that's not the LLM's fault.
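For what it's worth, "show it the headers" can be as simple as pasting them into the prompt. A hypothetical sketch of building such a request for an OpenAI-compatible local server (llama.cpp's llama-server, Ollama, etc.); the endpoint, model name, and `lora_send` header line are all made-up placeholders:

```python
import json

# Hypothetical sketch: bundle library headers into the prompt so a
# local OpenAI-compatible server can generate code against an API it
# may never have seen in training. Names here are placeholders.
def build_request(header_text: str, task: str, model: str = "local-model") -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You write code against the library described below.\n\n"
                        + header_text},
            {"role": "user", "content": task},
        ],
    }

payload = build_request(
    "int lora_send(const uint8_t *buf, size_t len);",  # made-up header line
    "Write a C function that sends the string 'hello' over LoRa.",
)
body = json.dumps(payload)  # POST this to e.g. a local /v1/chat/completions
```

The point is only that the headers land in the context window; whether the result is decent still depends on the model.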


> not a lookup table

You can imagine the classic attention mechanism as a lookup table, actually.

Transformers are layers and layers and layers of lookup tables.
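That intuition is easy to make concrete. A minimal NumPy sketch of single-head scaled dot-product attention, viewed as a soft lookup: each query row selects a convex mixture of the stored value rows (shapes and data here are arbitrary toy choices):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Single-head scaled dot-product attention: each query "looks up"
# a softmax-weighted mixture of value rows - a soft, differentiable
# version of indexing into a table of (key, value) pairs.
def attention(Q, K, V):
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # (n_queries, n_keys)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))  # 2 queries
K = rng.normal(size=(3, 4))  # 3 stored keys
V = rng.normal(size=(3, 4))  # 3 stored values
out, w = attention(Q, K, V)
# each row of w sums to 1: a soft selection over the 3 stored entries
```

The "lookup" is never a hard index, though - every entry contributes a little, which is exactly what makes it trainable by gradient descent.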


If there are open source projects that use said library, then probably yes.


Unless they are not hosted on GitHub; then no :D


Even if you absolutely have to use an LLM for some reason, there are already perfectly good LLMs for code generation that you can comfortably run on commodity hardware.


Yeah, the DeepSeek models are actually pretty solid.

I use them with avante.nvim for tedious refactors. All local.


You can run it all locally: https://github.com/ggml-org/llama.vscode


This is a free tool (though I wouldn't use it, since it's from ByteDance). Also, you could have an AI-powered IDE locally without a subscription.


For now. It’s not clear what the monetisation strategy is, but it will probably be paid in the future (alongside whatever other strategies they may have, like selling data, etc.)


> world where coding costs a subscription

I think you are approaching this with the wrong mindset. I see it as paying somebody to type and document for me. If you treat LLMs like a power tool, it is very easy to do a cost-benefit analysis.


That is what happens when developers want to be paid for their work but refuse to pay for the tools they use, regardless of how little they may cost.

So we're going back to the last century, except that, given we are in a different computing context, only the stuff that can be gated via digital stores or web services has a way to force people to pay.


You are not alone. The only future for these sorts of AI coding helpers is for them to use 100% free-software AIs. On the bright side, good progress is being made in that area, and the main sticking point seems to be the expensive hardware to run them on (and integration). Costs on that hardware will hopefully drop over time, so they won't still be mostly limited to the first world (like the subscriptions are).


It will fail epically, as always with these morons; let's hope some of us still feel like helping out once the penny drops.


That's not going away at all though.

But I am glad we now have more paid options available. Tooling is important and people that do good work should be able to charge for high quality tools.

I would be much happier in a world full of tools licensed like Sublime Text, where I can purchase a license and just run it without the need to constantly phone home though.


Use an aggregator like nano-gpt.com. You get access to all the top models (including o1 pro, which usually requires a $200/month subscription) at a pay-per-use rate. Short on cash? Use DeepSeek models for 0.1 cents.


"install Linux for free - fire up Emacs and start hacking without an internet connection" - that still works, you know. Nobody is forcing you down this subscription path (except Microsoft).


No one is forcing you to subscribe. You can code the old-fashioned way; if you wish to use AI, you can run your own local model.


Vim and self-hosted Ollama for free?

Nothing stopping you from building the world you want, really.


Just get a graphics card and run a prompt-compatible LLM yourself. Recent models like Phi-4 show decent results (relative to your general amazement baseline) even at medium quantization. I'm running q4_k_m (8 GB) with custom "just print and stfu" characters and rarely reach for Claude anymore.


This is an age where you can write your own LLM extension.

There's no moat; all the clever prompting tricks Cursor et al. use are just that - there is no secret sauce besides the model at the other end.

Complexity isn't an issue either, have the model write the interface to itself.


> I fell in love with coding because I could take my dad's old Thinkpad, install Linux for free - fire up Emacs and start hacking without an internet connection.

I'm not understanding what it is about a private company launching a product that changes that?


What changes is that others aren't going to learn coding in the same purist way.


I think that world exists already. I've been paying for JetBrains licences for years because their value is greater than their cost.

You can do it without IDEs, nothing is stopping you. I don't think this is a new phenomenon though.


>>"coding costs a subscription."

You are free to code without spending a dime; these AI dev tools cost money because LLMs cost money to run.

You can get the same experience with open-source tools, running your own model on your PC.


>but I hate that we're moving to a world where coding costs a subscription

I mean, you don't need to if you don't want to. I am gainfully employed as a software developer, and what I do every day is literally just fire up Emacs on my Linux machine and write code. To this day I haven't figured out what LLMs are supposed to do that a bunch of yasnippets don't.

Just like five years ago, most of my day is reading and debugging code; I'm not limited by how fast I can type.


You're definitely limited by how fast you can read and understand.


True, but the jury is still out whether text generators help out with either of those.


Understanding, probably not, but they can read bizarrely fast and write a summary of what things do very quickly; maybe not 100% accurately, but close enough to be of value when trying to understand a large bag of code fast.


The jury which only has you as a member, obviously.


Is there some alternative, though? Using an LLM might speed up writing code, but it obviously can't speed up reading and interpreting it.


It's much worse: I doubt they created and published this IDE for profit; they want people's data.


And a world where you cannot be sure who has access to your source code (or even to your systems).


Some AI IDEs allow you to use local models. If compute keeps getting cheaper, this will become the norm.


VS Code is free, and Copilot has a free tier.


And you still can. What are you on about?


I've been coding for around 25 years, and I have 3 or so subscriptions to different AI products that I use for coding.

It is kind of terrifying that I would probably stop coding for the day if those subscriptions ended. (I get far too much convenience out of them.)

I have tried to rationalize it by the fact that I do pay for internet, version control, my peripherals, etc.


You should check out openrouter if you haven't already.


It's been a little janky for me - I usually get better results from directly using a provider's API.


I was going to say it's mostly been great for me - at least until recently; trying out DeepSeek R1, it's not been great. I can't tell if it's the router or the provider.


You can still do it like that.

The problem is that coding was a passion but turned out to be a very lucrative profession, so loads of people who can't do it want to do it.

This is why we have languages like Go, and AI tools: they allow people who don't want to learn how to be developers to get jobs as developers.


And here I thought those are all using Python, JS and Ruby.


I don't think anyone's using Ruby.


It will be free soon enough.


$20/mo isn't a lot... Especially if you make money with coding.


If you pay out of pocket, it means it's not an approved tool at your company, which means you can be fired and sued for leaking their intellectual property.

Also, $20 per month is way less than what it costs them to run it. Eventually they will need to charge way more to cover their costs, and the people who can't code without an AI assistant will need to pony up :)


The coding IDEs at $20/mo do not result in your IP being shared with the AI provider for training.

That only applies to regular ChatGPT use.

And developers actively using AI for coding can easily spend more than $20/mo on the API.

There are people spending $10-15/day in OpenAI API usage working through Cline.


It likely doesn’t matter whether it is used for training in this case. I can’t send our codebase to a friend even if they would never look at it or use it.

The issue would be based on the terms of employment and the software license. There’s likely a provision that just says “don’t share” regardless of what the other party will use it for.

To the OP’s point, if your company is paying for the sub, then sending the codebase would be an approved use of the codebase as part of your job.


The idea of "do not send proprietary intellectual property to some rando" doesn't even cross your mind, huh?

I hope your current and future employers never find out about this side of your personality :D


> The coding IDEs at $20/mo do not result in your IP being shared with the AI provider for training.

If you’re not running the model locally, you’re sending your code to them for analysis. Now ByteDance has it.


It is a lot outside of the US. Even with an ok developer salary in Belgium, I'd have to really have a use for something to pay $20/mo.


I do a lot more work with Cline and Aider than without. For now that translates to a lot more money. In the end it's probably just going to be part of your normal job; you will just deliver a lot more in your 40-hour week than you did before. At this stage in the AI hype, I would insist on more, or code side projects, or do multiple projects. I write a lot more code and still have more time than before Sonnet came out.


If it improves your performance by 1%, then for a salary of $2000/mo, that $20/mo is breakeven.

If it benefits you more than 1%, then you're in profit.

Of course, if you're in a job that doesn't actually care about performance, and performing won't lead to better salary at some point, then it may not matter.
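The breakeven arithmetic above, spelled out (using the comment's illustrative $2000/mo and $20/mo figures, not universal numbers):

```python
# Breakeven: the tool pays for itself once it buys back a fraction of
# output worth its price. Figures are the comment's examples.
salary = 2000.0       # $/month
tool_cost = 20.0      # $/month
breakeven_gain = tool_cost / salary  # productivity fraction needed
# 20 / 2000 = 0.01, i.e. a 1% improvement is the breakeven point
```

At a higher salary the breakeven threshold only gets lower, which is the core of the argument.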


Only if it increases your compensation by that 1%?

I get making economic/stats-based analyses like this, but is your boss going to notice that 1% and give you a raise they otherwise wouldn’t? Probably not…

Your company culture can be performance-minded and this can still be true.


What a weird way of thinking. If you're going to be that pedantic, you could just stop coding 1 percent earlier each day, since your boss wouldn't notice anyway. Personally, I live in Belgium, and I would be glad to pay 20 euros per month if it increases my productivity, whether my boss notices or not.

The real problem, however, is that I cannot simply share my employer's repos to be absorbed by any LLM out there. So I use only the tools that my employer provides and approves of. Currently that is Microsoft Copilot chat/RAG via my work account. It takes some copy/paste and adaptation of problems/solutions, but it is much more efficient than using SO. It is also a great teacher that never gets tired of my many why/how questions.

In my view, the future is LLMs that can train on entire private code repos until they understand their ins and outs. Currently the repo would need to fit in the context window, hence you need to spoon-feed it, as I understand things.
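A back-of-the-envelope way to check that context-window constraint, assuming the rough ~4 characters per token heuristic (an approximation, not a real tokenizer count):

```python
from pathlib import Path

# Rough sketch: estimate a repo's token count to see whether it could
# fit in a model's context window in one shot. The 4-chars-per-token
# ratio is a common rule of thumb, not an exact figure.
def estimate_tokens(root: str, exts=(".py", ".kt", ".java", ".ts")) -> int:
    chars = sum(len(p.read_text(errors="ignore"))
                for p in Path(root).rglob("*")
                if p.is_file() and p.suffix in exts)
    return chars // 4

# If estimate_tokens("my_repo") exceeds the model's window (say,
# 128_000 tokens), the repo has to be fed piecewise.
```

Anything over the window has to be chunked, retrieved, or summarized - hence the "babyspooning".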


I was only embodying the mindset in the comment I was replying to. Thinking in terms like “My time is worth $X, so if I do Y thing that takes Z minutes, that’s D dollars saved.” It gets really hairy and leads people down false paths where they think they’re being super “rational” all the time.

In this case it tricks you because it assumes that the LLMs increase productivity and launders that into the calculation. For me, and many people, LLM usage decreases my productivity.


Fair enough!



