> "As creators of software and websites, we often get caught up in the never-ending pursuit of optimization. We constantly strive to make our code more efficient, our algorithms faster, and our page load times shorter."
I completely agree with this. The people who like to dig deeper usually find structural issues that help you organise your tech debt and keep it maintainable. The others usually keep the ship moving forward and the product team happy, applying band-aid fixes where necessary. The key is to ensure that they are talking to each other!
> The key is to ensure that they are talking to each other!
Yeah this is exactly how I see it too. You need both of these people. Even being a rabbit hole person myself, I know it is easy for us to get lost. But I think the issue is we are fewer in number and what we see isn't visible to others. But there is a deep synergy between our groups that we must balance.
Classic MBA vs PhD problem? In some domains people refuse to hire PhDs because of analysis paralysis; they prefer the stupid MBAs who will take risks that PhDs never would. In others, they avoid MBAs like the plague because they are shallow, know-nothing organisms that leech on revenue.
The article gives some examples about Go, and lists interfaces as something you should use, because the performance cost is tiny.
As someone who was formerly involved with the canonical Go style, I wanna clear up the reason why we said you shouldn't overuse interfaces. It has nothing to do with performance. (The difference between a "virtual" and "static" dispatch is not worth thinking about in Go.) It has everything to do with readability.
If you return an interface, I can't figure out what happens when I call it. That's fine with something like io.Writer, because it's a good abstraction and, anyway, I can guess. But hiding business logic behind an interface is bad, because I usually should understand what happens when I do something like foo.IssueReceipt(), and if foo is an interface, then I can't really know.
The rule in Go has for a long time been "return concrete values, accept interfaces" for precisely this reason.
Another relevant soundbite is that Go interfaces are an "accept-side" construct, unlike Java's, which are a "declare-side" construct. This is a way of saying you shouldn't define an interface near its implementation, you should define it near where it's accepted.
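A minimal sketch of what that convention looks like in practice (the names ReceiptSaver, PostgresStore, NewStore, and IssueReceipt are all invented for illustration): the producing side returns a concrete type, and the interface is declared on the accept side, next to the function that consumes it.

```go
package main

import "fmt"

// PostgresStore is a hypothetical concrete type. In the Go convention,
// its package exposes it directly rather than hiding it behind an
// interface it defines itself.
type PostgresStore struct{}

func (PostgresStore) Save(receipt string) error {
	fmt.Println("saved:", receipt)
	return nil
}

// NewStore returns the concrete type, not an interface, so callers can
// see exactly what they are getting.
func NewStore() PostgresStore { return PostgresStore{} }

// ReceiptSaver is declared here, on the accept side, next to the one
// function that needs it -- not next to PostgresStore.
type ReceiptSaver interface {
	Save(receipt string) error
}

// IssueReceipt only needs the Save behaviour, so it accepts the
// interface; PostgresStore satisfies it implicitly.
func IssueReceipt(s ReceiptSaver, receipt string) error {
	return s.Save(receipt)
}

func main() {
	store := NewStore()
	_ = IssueReceipt(store, "order-42") // no explicit cast needed
}
```

Note how the call site works with the concrete value throughout; only the consuming function narrows it to an interface.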
You might notice that this makes it really hard to do dependency injection in Go, and this, too, is intentional.
When writing code in any language, readability and maintainability should always trump performance. Go is fast enough; if it's slow, it's more likely you need to rethink how you're solving the problem than to optimize your code.
Make it work, make it pretty, make it fast, in that order. And don't optimize without measuring, another trap that many people (present company included) fall into.
>Make it work, make it pretty, make it fast, in that order. And don't optimize without measuring, another trap that many people (present company included) fall into.
This view is common nowadays, but I think people need to dial that back by maybe 30%. Too often, it's used as an excuse for software that's extremely wasteful, when with minimal tweaks it could be efficient.
I am a "simplicity uber alles" kinda guy, and even I think you should add a line of code if it'll shave off half the runtime.
I've heard people quote this, and the infamous Donald Knuth line, as an excuse for not knowing how to do basic things, like binary search. I've even heard people complain about caching network IO without a benchmark to show RAM is faster than DNS.
Programmers, more than other people, have a tendency to take a pithy soundbite and make it their life philosophy, and I'm saying it's better not to.
If you don't like returning an interface because the caller can't tell what it does, why is accepting an interface any better? Don't you end up in the same conundrum of not knowing whatever it is that you need to know about foo.IssueReceipt() if someone passed foo into your function?
One is call-site flexibility - if I have a function that takes an interface, I can call it without an explicit cast with either an interface or a concrete value. If it returns a concrete value, I can assign that to an interface or a concrete value. So far so good. But accepting a concrete value, or returning an interface, forces the call site to match the decision.
Another reason is that you usually read the call site first, then go inside the function to look at it in detail, so you already know what concrete value is being passed, even if the function only sees an interface.
Finally, it's a local change to change an API to accept a concrete type after it has been accepting an interface, but changing an API that used to return an interface to return a concrete value could potentially involve a refactor of the whole codebase. (In fact, one such refactor motivated this rule.)
Of course it's not a hard-and-fast rule. Go style also has you return error, which is an interface, and the standard library passes around io.Writer and os.FileInfo all the time. But for business logic, I think it's the right rule 95% of the time.
I probably over-stated that a bit. I don't think there's a hard stance against it.
I do think it's rarely a good way to structure your program, because it makes it harder to know what's happening, and the goal should be to make it easier to see what's happening, both for the programmer and the poor devops guy who has to debug it when it goes down in production and people are yelling.
Of course the real world is complicated, and DI isn't always avoidable. You are right that Go's accept-side interfaces are neutral with respect to DI.
> Now that Go has yield, you can avoid using channels even more.
I’m very confused by this statement. To my knowledge, and after a good 5 minutes of searching, Go does not, in fact, have yield.
Almost looks like an AI hallucination.
edit: the post itself is tagged with “ai” while its content doesn’t mention AI at all. To me this lends credence to my theory that this is an AI hallucination and the post was partly or entirely generated by AI. While I’m amazed by and excited about AI, this kind of generated content makes me worried it really will contribute to the fast enshittification of the Internet.
There's an experimental feature in Go that lets you write Python-like generators that work with range, and the callback is called `yield`. Here are some details: https://go.dev/wiki/RangefuncExperiment
That's probably what the post is talking about.
(I personally don't love this change, but now that it's clear Go 2.0 won't happen, it seems like some of the more outlandish ideas are making it into Go 1.x. Ah well.)
Interesting, I wasn’t aware of this experimental feature.
I still think the post was AI generated. Reason being that “now that go has yield” implies this is an actual available feature of go. It’s not. And I’d find it very surprising someone would refer to an experimental feature without mentioning it’s experimental.
One thing I have noticed is that a lot of devs don't know what a profiler is, and even fewer know how to use one. They also don't have log files that would allow analyzing where the code spends most of its time.
Telling people to "strive for balance" sounds wise but never really lands, because humans are inherently unbalanced in our desires. Everyone has different priorities.
Premature optimization is the root of all evil -- Donald Knuth
In Donald Knuth's paper "Structured Programming with go to Statements", he wrote: "Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
Write the simplest code first - get it right, if too slow or uses too much resource then set a measurable goal what a good speed or resource needs to be and then optimise until you reach that goal.
Exactly, today it seems like an excuse not to think at all. Even if you call out something that definitely will not scale (e.g. a stupid O(n^3) approach that will blow up at even 100 items, when you know the max will be closer to 1000, just as an abstract example), those same guys laugh at you about "premature optimization", all while knowing full well, and muttering, that "the prototype becomes the product". You end up with planned tech debt that has consequences for the whole architecture, all of which could have been avoided by being a little less lazy instead of hiding behind "premature optimization".
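To make the abstract example concrete, here is a small sketch (function names and inputs invented for illustration) of the kind of call-out being described: two duplicate checks that are equally easy to write, one accidentally quadratic and one linear. Preferring the second is not "premature optimization", it's just picking the obvious data structure.

```go
package main

import "fmt"

// hasDupQuadratic compares every pair: O(n^2). Fine at 100 items,
// painful at 100,000.
func hasDupQuadratic(xs []int) bool {
	for i := range xs {
		for j := i + 1; j < len(xs); j++ {
			if xs[i] == xs[j] {
				return true
			}
		}
	}
	return false
}

// hasDupLinear uses a set: O(n). Roughly the same amount of code,
// no cleverness required.
func hasDupLinear(xs []int) bool {
	seen := make(map[int]bool, len(xs))
	for _, x := range xs {
		if seen[x] {
			return true
		}
		seen[x] = true
	}
	return false
}

func main() {
	fmt.Println(hasDupQuadratic([]int{1, 2, 3, 2})) // true
	fmt.Println(hasDupLinear([]int{1, 2, 3}))       // false
}
```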
You’re calling this “just an abstract example”, but it’s actually pretty easy to be incredibly slow if you get your ideals from medium articles.
The ESB team I have to work with, for example, has a hard cap of 8,000 lines (of XML) in a single file due to their shit performance. Why? “Because recursive, immutable code is ‘easier to reason about’”
For starters, I don’t even buy that recursion and immutability make code easier to reason about, but setting that aside: if it’s making computers that should be capable of processing files millions of lines long grind to a crawl after just 8,000, maybe we should reconsider this approach.
> The ESB team I have to work with, for example, has a hard cap of 8,000 lines (of XML) in a single file due to their shit performance. Why? “Because recursive, immutable code is ‘easier to reason about’”
Bullshit like this really makes me want to leave the industry. The more "professional" and "business" programming gets, the more bullshit happens.
> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs
Do they really? I really don't encounter that. I find premature abstraction more common.
But let's assume people are constantly writing for loops instead of using .map... Is that really a bigger problem for software than vague requirements which change before they were even properly understood in the first place?
I find this quote really overused and just... Not relevant to my experience as a full stack web developer.
Yes I should have been clearer, I didn't mean to suggest it was wrong at the time. Just that it may be wrong now. That could be a limitation of my experience though.
> Do they really? I really don't encounter that. I find premature abstraction more common.
There's a pithy quote about that too, by Alan Perlis: "In programming, everything we do is a special case of something more general -- and often we know it too quickly."
Quoting this phrase is a major pet peeve of mine. In an age where applications struggle to do such simple things as displaying text and images when running on supercomputers 10,000x as fast as the ones Knuth had when he wrote this, you know the problem isn't "too much optimisation".
What Knuth means is "don't waste time micro-optimising parts of your code which aren't bottlenecks anyway". Yet when this is cited it's basically taken as "don't give a thought to optimisation until later on; instead do things the easiest way for you as a dev".
> don't waste time micro-optimising parts of your code which aren't bottlenecks anyway
The irony is that doing so barely optimises anything. The optimisations aren't so much premature as... Not optimisations. I'm a big fan of Casey Muratori's video where he compares 3 things commonly called optimisation:
- actual optimisation, which is detailed work involving profilers and evidence
- fake optimisation, like using fast inverse square root in a web app because you heard it was faster
- non-pessimization, where you just try to write straightforward code without introducing major performance regressions without good reason
"Premature optimisation" is probably just "fake optimisation".
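A small sketch of non-pessimization in that sense: building a string in Go with strings.Builder rather than repeated concatenation. No profiler or benchmark is needed to justify it; it simply avoids introducing an accidental O(n^2).

```go
package main

import (
	"fmt"
	"strings"
)

// joinConcat re-allocates and copies the whole string on every +=,
// so the total work is O(n^2) in the number of parts.
func joinConcat(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// joinBuilder appends into one growable buffer: O(n) total.
// Same straightforward loop, no pessimization.
func joinBuilder(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := []string{"make", " it", " work"}
	fmt.Println(joinBuilder(parts) == joinConcat(parts)) // true
}
```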
It's also just something that all engineers do. EEs don't concern themselves with the power draw of an LED on a space heater. MEs don't work hard to make a piston last 100 years in an engine that will last 10. It boils down to 'focus on important things, not unimportant things'.
Exactly. He's not saying "don't think about optimisation until you've written all your code". He's saying "don't try to optimise until you know where it's needed the most". In many cases that does mean write a minimal implementation first. However in some cases knowledge, experience and early profiling will give you some pointers before that.
He also didn’t mean you should write code known to be shitty and slow, and yet we have entire communities of programming paradigms arguing you should do just that, treating as “premature optimization” anything beyond “feels fast enough”.
What is the definition of “fast enough”? It’s impossible to tell, because those exact same people have their guns cocked and ready to fire a “HArDWarE is ChEApER tHAn DEvEloPErs TiMe!!!!!1!1!1!1!1!!” at you the moment you suggest that maybe needing an entire 32-core Xeon server with a TB of RAM to serve 20 requests a second is MAYBE in need of optimization.
The most important thing for performance is to get the architecture right. You can do micro-optimizations any time but architecture is forever, you typically can't fix that after you've shipped it.
Almost all performance in modern systems is architectural in nature. Architecture doesn't live in 3% of your code, it lives in most of it.
Optimizing performance in code is mostly not so important anyways, because it can be done later fairly easily.
Architecture (especially infrastructure) is much more critical. Here, if you don't account for performance (or at least for performance optimizations later on), then you might run into big problems later.
>Write the simplest code first - get it right, if too slow or uses too much resource then set a measurable goal what a good speed or resource needs to be and then optimise until you reach that goal.
So you write the code with algorithm X. You potentially might not know the complexity, depending on how extreme you are with this philosophy, but let's say it's O(n^2). You run it on your test data. It's too slow because of some unneeded deep copies. You change that. Now it's fast enough. It gets merged.
3 years later, you're experiencing integration test failures, because scaling up your system has caused this algorithm to make things time out. No problem: you'll simply look into it and optimise it. Except you can't. Thanks to leaky abstractions, threading-models, or just an inherent quality of the code, changing this algorithm to something with a lower complexity becomes a major refactor. Pray you don't have external APIs or stakeholders depending on it.
So, not only have you wasted the time polishing the O(n^2) algorithm, you also now have a huge technical debt from what can simply be described as inadequate forethought.
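The "unneeded deep copies" in this scenario are worth spelling out, since in Go they are often a one-line fix. This sketch (with invented function names) contrasts a read-only function that defensively copies its input with one that reads through the slice header directly:

```go
package main

import "fmt"

// sumCopy makes a defensive deep copy before reading -- an allocation
// plus an O(n) copy that buys nothing when the function only reads.
func sumCopy(xs []int) int {
	tmp := make([]int, len(xs))
	copy(tmp, xs)
	total := 0
	for _, x := range tmp {
		total += x
	}
	return total
}

// sum reads the shared backing array through the slice header:
// no allocation at all, same answer.
func sum(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	xs := []int{1, 2, 3, 4}
	fmt.Println(sum(xs) == sumCopy(xs)) // true
}
```

Deleting a copy like this is a local change; lowering an algorithm's complexity class after the fact, as the scenario describes, usually is not.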
The 'simple code now, polish later' process - which is really just 'move fast and break things' in a different shade of paint - means you never learn to internalise the easy wins. It also paradoxically portrays optimisation as both too complex to do initially and trivial to do later. It's not the latter. Having a good gut feeling for algorithms, or for the Kafkaesque way memory works, is not something that is learned over the course of a JIRA ticket - it's learned over the course of years, treated like any other code-quality concern. For comparison, you could close many tickets quickly by using a global variable to pass data around. But you don't. You make sensible, idiomatic changes to make sure things are stored and accessible where it makes sense. Optimisation is the same: there is a lower bound below which the code is unacceptable, even if it doesn't fail requirements.
StructuredProgrammingWithGoToStatements was written in 1974. It doesn't track well to modern computing models. I have more L1 cache than he had main memory. I do accept that the vast majority of bugs were from this, 50 years ago. I don't accept that's still true today. It's still a useful thing to teach new learners so they learn how to prioritise their development but as professionals creating novel things we need to have a certain level of planning ahead. And pride.
It's also a tautology. If the optimisation is worthwhile, it's not premature. If the optimisation is noncritical, it's premature.
My example above is not fanciful, by the way. Off the top of my head, Windows, Python and C++ all had/have trouble with some performance loss due to the fallout from earlier decisions and difficulty rectifying it once the world is built on top of it. I've run into similar problems at my own jobs, particularly anywhere that uses microservices. You can't avoid all of these but you should avoid those you can.
I really wish this "problem" was more common!