Second all of this. 3% is nothing for an exec if they're invaluable to the business. If this person leaves, how would you feel? Glad you still have that 3%, or panicked because you can't run the company without them?
> Organizing your local software systems using separate processes, microservices that are combined using a REST architectural style, does help enforce module boundaries via the operating system but at significant costs. It is a very heavy-handed approach for achieving modularity.
One of the most effective and succinct criticisms I've seen of microservices: an architecture hack for modularity.
The irony is, it's rarely an effective hack. If you don't get your service boundaries right, modularity goes out the window, and you're stuck with a bunch of network calls that should be function calls.
Yup, I've seen it at just about every company I've worked for. A cluster of microservices that is really just a distributed monolith. Is it really a microservice if changing one part of it requires making corresponding changes to every service downstream? Is it really a microservice if one part breaking causes the entire system to fail spectacularly?
I feel like the pendulum is starting to swing the other way as companies learn that microservices aren't the end-all solution to their process problems, and that prioritizing new features every sprint while constantly pushing back housekeeping, bug-squashing, and system improvements will result in a broken collection of microservices just as quickly as it resulted in a broken monolith.
> prioritizing new features every sprint while constantly pushing back housekeeping, bug-squashing, and system improvements will result in a broken collection of microservices just as quickly as it resulted in a broken monolith.
Now, HTF do you convince a C-Level of this truth? Seems like there is never any time for any sort of housekeeping until the house is literally on fire and the whole world is looking at our smoke pillar.
Even then, they'll usually be like "Ok, take 2 weeks to fix all our architecture problems and then get back to work on new features!". As SOON as the fire is doused, they want to move on rather than cleaning up and removing the flammable material and the flames that keep starting the fire.
And just like you don't ask permission to write a line of feature code, you don't ask permission for maintenance work. You just do it and schedule it according to business needs to the best of your ability. And make sure not to apologize for doing your work or treat maintenance work as "unimportant", because that is how it will be perceived.
It does get tricky when you have a manager defining tasks and you have yet to determine whether their job is to be a technical manager (and whether they're adequately competent) or a nerd babysitter/translator. In the former case it's good to run maintenance projects by them if they need active, dedicated time, as opposed to something you can mostly do during downtime. If it's the other kind of "manager", and you've already tried and failed to convince them to prioritize basic stuff, you just have to put it inside other estimates and rope in the rest of your team to do the same, and/or polish your résumé.
It's tricky when it gets to the point that you need specific tasks to clean things up.
But lots of developers think any amount of cleanup needs to be scheduled. No, it does not. When you build on a feature, you modify the feature so that you can actually build on it in a reasonable fashion.
Lots of developers will, instead of doing that, shove round pegs into square holes. There's nothing about a task that says you have to do it in the sloppiest, shittiest way possible. It's like the plumbers that cut through your floor joists because that's the easiest way to get their drain in.
> You don't, it's not his job to care, it's yours.
Last time I checked it was the job of the Project Manager to decide which tasks land on the sprint backlog, as he decides the priorities of what has to be delivered.
Literally written in the job description.
As a developer you can create a plan/tasks for decreasing technical debt, but it's never your job to prioritize that, because you don't know what the priorities of upper management are.
Unfortunately, us bottom guys don't generally get to pick whether or not the org is bottom up or top down.
Guess switching jobs is always an option, but what I've gone through is basically watching an org, as it grows, get switched from being a bottom-up to a top-down org because the C-level guys want more control over everything.
In any sane system, technical debt isn't backlogged and prioritized by project managers, it's simply a tax on all work.
The point of "just doing" your technical debt isn't that you're stealing time from the PM, it's that when an engineer estimates their work, story points the next feature and everyone is deciding on the time frame, that process should implicitly include ~20% of time for you to continue to fix and update and improve things.
Most PMs I've worked with do not care about that 20%; in fact, they're happy that we're taking the time to do it. They just want accurate estimates so they can tell stakeholders a realistic time frame and somewhat meet that goal. All you have to do is bake your 20% in and everyone is happy.
Honestly if I worked for a company that micromanaged my time so closely that I was prevented from cleaning up technical debt and fixing things, I would still clean things up and be good at my job and let them fire me for it (or be searching and leave anyway). No PM, no middle manager, and no executive can ever make me sacrifice the quality of my work. They can only replace me with a monkey that won't have the same issue.
It's the job of the PM to decide what to build. It's your job to decide how to build it, which includes taking time for an appropriate level of quality.
It's easiest to improve the codebase where and when you alter it for feature work. This is also a good control for yourself to stick to relevant refactorings. This should be your professional autonomy.
For the remaining, like, 5 to 10 percent of the time when you do need 'dedicated' tasks, explain them to the PM; he'll support them because he trusts you, thanks to your track record of delivering.
> It's the job of the PM to decide what to build. It's your job to decide how to build it, which includes taking time for an appropriate level of quality.
Quoting my current Project Manager:
"We are short on deadlines, don't care about quality, just deliver crap and we will fix it over time (we will not /s)"
You live in some unrealistic world.
Companies have deadlines set together with clients. Deadlines are signed off by the people who have no idea how technically challenging it can be to build something.
You are hired to deliver ON TIME. Nobody gives a damn about quality when a new client is banging on the door.
My quarterly deliverables are defined by the product guy above me. His priorities are defined by the product org. Ultimately, the CTO/CEO are the ones who set priorities and tell them how to allocate stuff.
I can RAISE issues that need to be addressed to product, but generally speaking, dev doesn't have a seat at the table when it comes to prioritizing stuff. Instead, that's all driven by sales. Sure the company SAYS they do, but the practice isn't there. Dev requests are routinely ignored to the point that dev stops making them.
Perhaps it's just my org that's dysfunctional; it doesn't feel that way, though.
Your org is most definitely dysfunctional. If you meant to say your org doesn't feel too out of the ordinary, then that's not wrong either, depending on what industry sector you're working in.
I'd summarize my comment again: if management doesn't let you do your job properly, you can suck it up, leave, or just fix things without asking anyone. But ask yourself: do you care enough about that company not to just move on?
It's what I tried to address in the second part of my comment.
A lot of teams don't have dedicated PMs in the first place, but among those who do get those jobs there's an extremely wide and well-distributed gamut of competences and skill sets, even more so than among developers, I'd say. And you have to figure out whether you got a proper one, in which case it's the right thing to run such tasks through them when non-trivial; and then there's the rest. Which market bubbles you've worked in determines whether that latter group looks like an outlier or the norm.
... Hello!
you do know the priority of upper management: they prioritize their own personal best interest, and as much as the board and CEO manage the incentive structure, this will be aligned with their fat bonuses.
So, ... do the same: if something creates pain for you, dispose of it.
- I can work with the current architecture, its ever-accruing debt, and the slowly inflating feature times, which ultimately is acceptable to the PM. This will get me pay raises, bonuses, and praise.
- I can work on the annoying stuff which product does NOT want me working on, which does decrease feature times and does decrease debt. This will get me chastised; even if I get praise from my local team members, I don't get it from the people paying me.
Working around the system is punished. Working through the system is rewarded. I get my fat bonuses by making my PO/PM happy. I don't get fat bonuses making my life easier.
> Now, HTF do you convince a C-Level of this truth? Seems like there is never any time for any sort of housekeeping until the house is literally on fire and the whole world is looking at our smoke pillar.
I've had minor successes advocating for a tick/tock approach, i.e. every couple of iterations you have one iteration where new feature work is banned in favour of cleanup, long-ignored bugs in the backlog, etc. I haven't found a simple way to explain this, but product-centric executives do seem to see "ok, they get one iteration of what they want in trade for me getting what I want in the others" as a reasonable trade to keep the dev team happy.
It's probably the predictability and bounded costs that help.
I've been on both sides of this. I've been on projects with tech debt, but I've also had lots of experiences where new developers join a project and immediately, before even seeing the code at all, announce that they enjoy "making code clean" and "cleaning up tech debt" and where is the tech debt they can refactor away? Or they'll join, spend literally 20 minutes reading the first file they come across and then start pronouncing the decisions made by their new colleagues 18 months ago as obviously "legacy" or "hacky".
That's usually a sign of trouble. The whole concept of tech debt or "cleanness" is very under-defined. One man's tech debt is another man's pragmatic solution, or even clean design.
The last company I worked at is basically being killed by this problem. The technical leadership is weak and agrees to what the engineers say too easily. Theoretically there's a product team but they aren't technical enough to argue with the devs. The moment the company started getting product/market fit and making good sales the devs announced the product - all of three years old - was riven with tech debt and would need a massive rewrite (it didn't, I worked on it and it was fine). Three years later their grand rewrite still didn't launch. Utterly pathetic. The correct solution would have been to crack down on that sort of thing and tell devs who wanted to rewrite the product that they were welcome to do that, somewhere else. Or, they could get back to adding features that would actually make users happier.
I think a lot of companies have had that experience - the sort of devs who are obsessed with "clean code" and "tech debt" often exhibit extremely poor judgement and can end up making things worse. Especially if the product has a naturally limited lifespan anyway due to e.g. pace of change in the industry, it can be fatal to spend too much time on meta-coding.
My general rule is that if it's been working reliably and contributing to revenue generation, it's not bad code. We have code over 10 years old that looks bad by today's standards, but since those are parts of the system that almost never need to be changed, there is absolutely no reason to try to make them better; they are already functional and battle tested. The concept of modern code is just a compulsion of developers to sprinkle their own beliefs about good coding standards over code that is application critical, and it should be resisted unless there is a good case behind it (e.g. those components regularly cause instability or need to be extended).
My method is radical transparency. I simply keep refactoring notes which I turn into user stories on the backlog. I don't take the PM seat as to when to do these user stories, yet have built up the expectation that it's an ongoing process and recurring thing.
The transparency creates a rational setting: clarity about what is currently wrong and what the benefit of the improvement is. It's kind of hard to argue against rationalism. When you do this calmly and with precision, you instill a perpetual feeling of guilt every time the PM tries to ignore it. Seed planted. 1-0 for you.
It's all in the open, the entire team sees it. That's different from a forgettable backroom discussion. The PM looks somewhat neglectful to everybody. That's 2-0.
You need to have/build a reputation of being constructive. Thinking along with the PM instead of being stubborn. So you let go of refactoring for two sprints because there really is an important business deadline, and then you bring refactoring back to the agenda. This demonstrates you're reasonable and flexible, whilst if the PM is to continue to ignore refactoring, they will stand out as the unreasonable. 3-0.
Continued ignoring of technical debt will inevitably lead to disaster, affecting real world results. Who will get the blame? Not you; you have your bases covered and it's on record who ignored all the warnings. 4-0.
These are all mind tricks to raise the cost for the PM of ignoring technical debt. Ideally it's not needed and you just agree to reserve 20% of your capacity for it, to end the discussion once and for all.
Of course, one may also encounter a job hopping sociopath PM that doesn't care about any of the above. In that case, just inflate your user stories and do what you can.
To conclude, in a truly healthy organization one would flip the question. PMs have a tendency to be feature fetishists. It's not even their fault as internal organizations richly reward whoever has the most ambitious feature agenda, whilst everybody thinks that the PM that "fixes the basics" is a slacker. The user of the product, in the meanwhile, doesn't seem to be a topic of concern.
Rather than requiring extreme evidence for a software engineer to do the basics of their job, ask a PM per feature what its justification is. Show me the business case.
They can't. I've been in this industry for 2 decades and the consistent experience is that every long term software project ultimately ends up in 80-90% of non-value or negative value, also known as "unrewarded complexity".
If I were PM, I'd spend 50% of each sprint on refactoring, simplification, and improving the core of the product that delivers business value, rather than building unrewarded complexity. For the other 50%, I'd take the team to the bar, which I consider a good use of time.
You'd find me a ridiculous PM, but I'm deadly serious that the product outcome would be better. You can't have technical debt when you don't do loans. Plus it's more fun.
This comment reads like a blanket statement but I feel like it has to be context specific. There are some cases where ignoring tech debt and grinding out code is the way to go. There are some cases where it’s not. Learning how to identify those is the hard part.
A better first approach would be to try and remove the roadblocks and bureaucracy that are encouraging developers to actually consider microservices. Those are of course not the only arguments for microservices, but I do think they're a big one, and one that more often goes unsaid than the modularity argument. You can have modularity in a monolithic app, but for some reason many teams don't actually take modularity seriously; they ask that everything be object-oriented and call it a day, as if writing a `module` in Ruby means you've made things modular. Microservices become appealing when the process to have monolithic code reviewed and deployed is long and painful, since they are by definition small, decoupled, and simple to deploy. Of course it often doesn't quite work out that way, in which case a monolith would have worked anyway.
But it's unlikely things will change in coding culture so long as there's the perverse incentive for businesses to encourage their engineers to use hacks out of expediency. Hacks inevitably make things really hard down the road, create mysterious problems that are hard to solve, and the solution is often to add bureaucracy or switch to microservice architecture.
I've worked with a variety of monoliths. The biggest complaint I've had from companies who utilize that architecture is that I shouldn't have to run entire pre and post pipelines to make a minor change to an internal function that only affects one package in the entire repository.
This is solvable, but it is a problem - one that causes a lot of engineers to consider starting fresh in a new repo anyway. Multiple repositories inherently solve this for you - but bring about other problems.
In my opinion - most projects don't need microservice architecture in the sense of separated binaries. Typically, they can be written with clear boundaries within the monolith such that, should the time come when scaling concerns are real for certain applications, then they can be ripped out as needed into their own binaries.
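To sketch what I mean by clear boundaries (a toy example; the `billing` package and function names are invented for illustration): each component exposes one small facade module, and the rest of the monolith only imports that facade, so the component could later be pulled out behind a network boundary without touching its callers.

```python
# billing/api.py -- the only module the rest of the monolith is allowed to import.
# Everything else under billing/ is treated as private to the component.
from dataclasses import dataclass


@dataclass
class Invoice:
    customer_id: str
    amount_cents: int


def create_invoice(customer_id: str, amount_cents: int) -> Invoice:
    """Public entry point. If billing is ever split into its own service,
    only this facade needs to be reimplemented as a network client;
    callers elsewhere in the monolith keep calling create_invoice()."""
    return _persist(Invoice(customer_id, amount_cents))


def _persist(invoice: Invoice) -> Invoice:
    # Private detail: storage, validation, etc. stay behind the facade.
    return invoice
```

A simple CI check (or a tool like import-linter) can then enforce that nothing outside `billing/` imports its private modules.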
The problem you describe is the inverse of the common testing problem you see with microservices (a dependent service changed without kicking off downstream tests against the new version). I'd take your version any day... It's easier for me to make tests faster and/or correctly identify which should run on a change if they're all in one repo as opposed to making network calls to each other across a service boundary. Also, clearly always better to run superfluous tests than to skip necessary tests.
> The biggest complaint I've had from companies who utilize that architecture is that I shouldn't have to run entire pre and post pipelines to make a minor change to an internal function that only affects one package in the entire repository.
That sounds like the problem is not with the monolithic architecture but with the internal processes, perhaps? If you "shouldn't have to" do something, then why do it?
Monolithic architectures default to this kind of methodology, though. You have to instrument tooling to identify what tests should be run when these files/services/etc get affected in order to optimize the developer flow in a monolith.
With several repositories you get this for free, with a trade-off of other difficulties.
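For illustration, a minimal sketch of what that instrumentation can look like, assuming a layout with one top-level directory per package and tests under `<package>/tests/` (names are made up; real tools like Bazel, Nx, or Pants also track cross-package dependencies, which this ignores):

```python
#!/usr/bin/env python3
"""Run only the tests of packages touched since the main branch."""
import subprocess
import sys

# Files changed relative to the main branch.
changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Map changed files to their top-level package directories.
packages = sorted({path.split("/")[0] for path in changed if "/" in path})

if not packages:
    print("No package-level changes; skipping tests.")
    sys.exit(0)

# Run only the test suites of the touched packages.
sys.exit(subprocess.run(["pytest", *(f"{p}/tests" for p in packages)]).returncode)
```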
A lot of teams use tools that can do this sort of thing OK out of the box. I've set up CI to only run affected tests and use incremental builds in the past. My current project has incremental CI.
The core problem is actually dev teams. Every time I've tried to implement incremental CI when I wasn't in a position to force it through, people fought against it, complained, moaned etc until higher level management gave in. The problems were:
1. People liked the feeling of every test passing every change. It made them feel safe and like they could point the finger at CI if something went wrong, because if every test passed it's definitely not their fault, right? But this leads to slow builds and then they complain about that instead. However this is preferable because slow builds are the project manager/TL's problem to solve, not theirs, so there's a built-in bias towards over-testing.
2. If there's any issue with the build system that causes one test run to affect another in some way (shared state of any kind), then incrementality can cause hard to understand flakes. Sometimes tests will just fail and it'll go away if you force a truly clean rebuild. Devs hate this because they've been taught that flaky tests are the worst possible evil.
3. Build systems are generally hard-wired for correctness, so if you make a change in a module that everything transitively depends on, they'll insist on retesting everything. This is reasonable from a theoretical perspective, but again, leads to very slow builds and people finding ways around them.
4. Work-shifting. Splitting stuff out into separate build systems (separate repos is not really the issue here), may appear to solve these issues, but of course in reality just hides them. Now if you change a core module and break things, you just won't know about it until much later when that other team updates to the new version of the module. But at that point you're working on another ticket and it's now their problem to deal with, not yours, so you successfully externalized the work of updating the use sites to someone else.
Competent firms like Google have found solutions to these problems by throwing hardware at it, and by ensuring nobody was able to create their own repositories or branches. If your company uses GitHub though, you're doomed. No way to enforce a monorepo/unified CI discipline in such a setup.
Out of curiosity, how long is a full build at that place?
I wonder what it takes to give up those points, as they are very good things to have. I can't imagine I would give up on them if builds took around an hour, maybe a day.
I don't quite recall. IIRC it was an hour maybe, so it wasn't like the builds were extremely slow, but it meant people didn't want to run the tests locally. So if you're trying to get your work committed and there's some failure CI picked up that you didn't, each iteration can be an hour, and that made people feel unproductive and slow, because some of them struggled with context switching back and forth and even the ones who could do it didn't like it. Which I totally agree with; context switching and small delays can be painful to feeling productive.
Obviously full test coverage is a very good thing to have. But people hate waiting. They ended up with a proliferation of smaller repos, replacing it with manual rec/sync work, which was IMO not a good idea, but probably felt more like "work" rather than waiting.
An interesting question arises here. Given most shops are now using interpreted languages, is it possible to make a change to, say, a .py file and ONLY deploy that .py file to production? i.e. do incremental releases
The issue was rather about the fact that changing that single .py file can have an impact on service A and service B, which depend on service C, where the file was changed.
In the microservice world, you have to run e2e tests because that's the only place where you can really guarantee the system will work as you expect.
Pacts won't solve that; integration tests, the same; unit tests, whatever.
Running a monolith's tests is overall faster than spinning up 20 microservices, bringing every DB to a specific state, and running the tests.
I don't know if "most shops are now using interpreted languages" is an accurate statement :) (I actually don't know).
But I think that's definitely an interesting idea (likely with some small levels of extra caution with some specific high profile files, like an API definition/endpoint or something).
When the microservice craze took off a lot of people asked me for opinions on it, and I said the same thing then that I say now: it more or less boils down to Conway's law. If you have multiple teams that need to iterate and deploy independently, the overhead may be worth it. But if you're just a small startup with a single unitary dev team that generally just deploys a new version of any changed services all as one lump, it's an insane amount of complexity and overhead.
Agreed. The kicker is knowing what those boundaries are and avoiding them. I really enjoy working with microservices for async tasks, and I think they're really well-suited to them. I really dislike inter-service dependencies. And of course, there's nothing wrong with a few, small monolith services. It's all about the right tool for the right job for me, and being too prescriptive to a particular architecture comes at a high cost.
It kind of feels like Sid is lying through his teeth here, as a person who deploys and maintains a private Gitlab installation, along with a whole host of other core platform services for internal use. Gitlab is by far the most modular off-the-shelf product I've encountered outside of JFrog's Xray. Look at their official Helm chart: https://gitlab.com/gitlab-org/charts/gitlab. Gitlab itself consists of 14 sub-charts and it also bundles 4 third-party sub-charts for object storage, a web proxy and ingress controller, certificate management, and the internal container registry. Gitlab without the third parties I believe consists of 15 distinct containers.
I don't think it matches what most people think of when they hear "monolith." It is absolutely not a single process only communicating between components via function calls. Many of the Gitlab core services, such as Gitaly, are written in Go, as well, not Ruby, though they also have "gitaly-ruby" as a testing service that can be used by developers not comfortable with Go.
I've seen several times where the overhead of the network request is dramatically higher than the work being performed.
The worst example was a validation microservice that did things like "is True" or "is greater than 5" one element at a time on a large object. So you'd have an object with 50 elements and it would make 50 requests to the service. This was in the context of a batch processing job that handled millions of items, so billions of requests end up being created.
I tried to explain that each network request probably does a few thousand things like "if request_type == 'get'" during its lifecycle on both sides of the transaction, but nobody got it and I quit soon after.
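For a sense of scale (a hypothetical sketch; the endpoint and field names are invented), compare the three shapes of that validation. Each remote call pays for serialization, connection handling, routing, and framework dispatch on both sides, so checking 50 fields one call at a time costs orders of magnitude more than doing the same trivial comparisons in-process, or at worst in one batched request:

```python
import requests  # only needed for the remote variants

VALIDATOR_URL = "https://validator.internal/check"  # hypothetical service

record = {"active": True, "retries": 7, "timeout": 3}  # 3 of the ~50 fields


# What the service was doing: one network round trip per field.
def validate_remote_per_field(record):
    return all(
        requests.post(VALIDATOR_URL, json={"field": k, "value": v}).json()["ok"]
        for k, v in record.items()
    )


# One round trip for the whole record is already ~50x fewer requests.
def validate_remote_batched(record):
    return requests.post(VALIDATOR_URL, json={"fields": record}).json()["ok"]


# The checks themselves are trivial comparisons; in-process they cost nanoseconds.
def validate_local(record):
    return record["active"] is True and record["retries"] > 5 and record["timeout"] > 0
```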
Resume driven development is one of the most depressing things I keep running into in my career. Wanting to learn and grow is great, but forcing customers to suffer the repercussions of your learning because technology/design pattern x is in vogue is closing in on malpractice.
Next time you interview with someone, ask why they converted a system to microservices at their last job. If they don't have an actual reason, bin them.
A lot of people seem to underestimate how fast computers are; a lot of times you don't need to scale horizontally, even at "webscale". And even when you do, it's not like you need microservices for that; sometimes you just run a new instance of $app, or spin off one thing to a new service (while similar to "microservices", it's not really the same thing as the bulk remains monolithic).
Right. And HTTP requests being share-nothing, they scale horizontally within a single instance, too. Just spin up more cores.
This microservices craze has been mind-boggling. As with many other fashions, it's excruciatingly difficult to make young or even average senior devs understand it's 90% bullshit.
When Netflix does it and the self-appointed expert bloggers praise it, as a CTO or team lead you’re pretty much screwed.
Why, SOA is a reasonable concept, and does help scaling. Say, billing, ETL and batch processing, and Web backends can live as separate services all right.
Going with micro-services is another story, and it makes sense in a more narrow gamut of circumstances. Say, microservices may be a great fit for AWS Lambda-style deployments, but this assumes spiky, sparse load patterns.
You can deploy your whole monolith on one single lambda and it will scale the same. It may even help to keep it hot.
Services, micro or not are only really useful when load factors are vastly different and you’re not serverless. Or when you have sufficiently many large teams and a sufficiently large codebase that a single deploy becomes unmanageable because too much communication / coordination is needed.
This probably is the reason why it works in some settings. Anecdotally, as someone with more domain/functional knowledge and operations knowledge than knowledge of frameworks, I found microservices architecture with good functional test coverage a better way to deal with a team of programmers like me (basically, a typical team in an offshore IT consultancy building enterprise applications using SpringBoot and Node.JS). Ship the service as early as possible with available talent and then get someone really good with that programming language or framework to deal with performance bottlenecks within the microservice. I see it basically as a way to limit the blast radius of the applications. Of course, as you said, get the boundaries wrong and you have a bigger problem.
It's worse, because now you can't fix your bad boundaries, because there are different teams working on each side, and different projects under different PMs.
These details are wild, scary, and fascinating. Fascinating from the perspective of how easy it is for foreign governments on the other side of the world to try to scare the shit out of US citizens by hiring some scumbag PI for probably as little as $5k.
> Beginning in September 2021, Lin hired a private investigator (the PI) in New York to disrupt the campaign of a Brooklyn resident currently running for U.S. Congress (the Victim), including by physically attacking the Victim. The Victim was a student leader of the pro-democracy demonstrations in Tiananmen Square in 1989, who later escaped to the United States, served in the U.S. military, and became a naturalized U.S. citizen. In September 2021, the Victim (then living in Long Island) announced his intention to run for a U.S. congressional seat on Long Island in the November 2022 general election.
> In hiring the PI, Lin explained that if the Victim was selected during the June 2022 primary election, then he might be “elected to be a legislator. Right now we don’t want him to be elected.” Lin emphasized that, “Whatever price is fine. As long as you can do it.” He also promised that “we will have a lot more-more of this [work] in the future…Including right now [a] New York State legislator.” Lin explained to the PI that Lin was working with other unidentified individuals in the PRC to stop the Victim from being elected to U.S. Congress.
Back in college, I did a bit of "spying" for foreign companies. Nothing mean-spirited, of course. They simply wanted me to photograph the entrance of the offices of local companies they partnered with, to see if the local company was at the address it claimed to be.
It was a good pay for a very easy job.
However, considering that Kim Jong-un's brother was killed at the airport by getting unwitting civilians to spray neurotoxin on him under the pretext of a camera prank, I don't see why foreign entities wouldn't be able to hire unsuspecting citizens to do shady stuff (that can be plausibly explained as jobs).
At least partly because interfering with the election of the US president is quite a bit more significant than interfering with the election of one representative.
On the flip side, though, I could easily see a government spend very little money (say, a couple hundred thousand dollars) to disrupt the campaigns of several tens of potential US Representatives. Given how things work in the House, a state actor could nudge things in the direction they want by disrupting a fairly small percentage of elections.
Presidential election interference would seem to cost a lot more than that, and be a bit more difficult to hide.
I feel for this person, but there's a line in here I found revealing:
> Unfortunately, Dgraph was always undervalued by investors.
This feels a bit like blaming the investors for why the company failed. To put it bluntly, there's a point at which you raise capital and build on vision. Your early investors and users are taking a giant leap that a. you can execute on the vision and b. customers will one day want to pay you for that vision.
Then there's point at which the rubber hits the road, and either customers pay you money for the thing you built or they don't. If they don't, you need a really convincing story to convince your investors they shouldn't cut their losses. It's a shame the company failed, it seems like they were working on some cool technology. But based on this post it sounds like no one wanted to pay for whatever it was they built.
Their last Series A [1] was led by Redpoint Ventures, which means you would have Tomasz Tunguz involved in this decision. Of all of the VCs I've ever seen, heard, or read about, he is the most metrics-driven. He regularly benchmarks companies against each other and has a deep understanding of the health of each aspect of a business. I would highly doubt this decision was personal or politics-driven in any way.
At a guess I would say that they did an analysis of the company and recommended it be shut down. And the other VCs, i.e. the Australian trio of Airtree, Blackbird, and Grok, who are far less experienced in the VC game, went along with it.
One of the biggest opportunities in technology right now is to re-run the Next/Apple playbook and create a new hardware/software technology company focused on developers. Good quality hardware married with a great OS / window manager. Most of the software we use to create software are electron apps, a thin layer of native code to run a bunch of javascript. The main notable exception is XCode, but the ecosystem of non-ios/apple developers is far larger. Wish someone would do this. I'd pre-order tomorrow.
If by "biggest opportunity" you mean "biggest opportunity to lose all your money and chase a pipe-dream" then I agree :)
'Next/Apple' isn't a quick playbook - it's an over 30 year R&D effort to create a hugely complex software and hardware business, and it spent about $100 billion in R&D to get its products where they are today (at the absolute cutting edge of technology). Writing your own modern OS and building/manufacturing good hardware to compete with this is difficult enough, and then you have the even bigger challenge of getting all the major software vendors to support your new platform.
"'Next/Apple' isn't a quick playbook - it's an over 30 year R&D effort to create a hugely complex software and hardware business, and it spent about $100 billion in R&D to get its products where they are today ..."
But isn't this much, much easier if you just piggyback on the Apple hardware ?
I always expected this to happen.
Circa 2008 or 2009 I thought that any day now there would be a linux distribution built specifically for one single Apple laptop. No hardware issues, no gremlins, no moving targets - you would have a (very) fixed hardware target and optimize just for that. Then I, as a user, could just go to the Apple store and buy a nice shiny device and install MBAlinux on it and call it a day.
I really don't understand why this never happened. Further, in many ways it seems that the opposite of this happened - installing linux/FreeBSD is weirdly painful on Apple laptops which is unexpected since we all know what is inside of them and the installed base is huge.
So I would suggest that you could, indeed, build a hardware/software ecosystem - just let Apple build the hardware part ...
Linux is a complete mess that may likely never get fixed. The problem is people. It is a representation of democracy: a messy combination of half-arsed solutions that forms a workable cohesive whole. This is not a valid competitor to the Mac. It is a compromise.
Let's take Ubuntu as an example. Today you can get Ubuntu laptops that will work out of the box. Is that true tomorrow? Absolutely not. The next distro version will break something in the hardware. I have been burned by this twice now. At the end of the day the Apple premium is not really a premium. It ensures that they continue to support their legacy hardware for years. The people who bash the premium as some sort of "idiot tax" are actually valuing the software that runs on the machine at $0. There are too many people in this world that don't understand how much effort it takes to create and maintain good reliable software. You see it on the app store where people can't fathom spending 99 cents and you see it in the bashing of Apple devices.
Let's assume that your hardware works beautifully with the current version. Then you actually look at the apps shipped with the distro. They are poorly made and do not form a cohesive OS. You are forced to hunt for other open source equivalents to basic stuff like "paint". Have you tried using the calculator or notepad equivalents? They suck compared to the simple and easy to use Windows and Mac equivalents. This is something even Windows gets right. It comes from the fact that Canonical does not have the resources to build each app around a unified design and UX principle so they farm it out to the "open source community".
Finally, why does each distro version seem to break something on the same hardware year after year? There seems to be a serious lack of regression testing on these distros. For 10+ years I have witnessed how one version of Ubuntu breaks some stuff, fixes others, and then the next version fixes some stuff but breaks previously working items. Then it gets worse: the subsequent version breaks previously fixed stuff again! I am forced to QA the entire OS every time a new release comes out and hope I don't miss something (which I always do)!
> This is something even Windows gets right. It comes from the fact that Canonical does not have the resources to build each app around a unified design and UX principle
Isn't it a bit on the nose that you accuse Linux of failing at the one thing that Windows is notoriously bad at, UX cohesion?
More seriously -- Linux reflects a different mentality and way of doing things. It is not for everyone. Downloading the software you want is the expected way to do things. I have no idea whether Ubuntu ships a "paint" replacement, but regardless, Pinta or Paint.NET are like three clicks away, thanks to the software repository approach.
Linux is more or less for people who want to experiment and configure things their own way, and make software that solves their own problems in the way that they want those problems solved. Creating a single, opinionated, out-of-the-box working desktop experience with perfect hardware compatibility with whatever bullshit proprietary-blob using silicon is out there is (a) hard, and (b) not what most Linux-using developers are interested in.
The people who use Linux largely recognize that yes, it is a compromise, but also that using Windows or macOS also represents a compromise. Having used Linux for nearly 15 years now myself, I can say confidently that the trade-offs for me weigh heavily in favor of Linux.
>Isn't it a bit on the nose that you accuse Linux of failing at the one thing that Windows is notoriously bad at, UX cohesion?
Yeah, you can criticize Windows for trying to update their designs with Metro and the like, but in reality all the old apps that worked cohesively are still there even today. Ubuntu and the Gnome or KDE based distros never had this to begin with. Just multiple flavors of the same cruddy base applications, since all the distros are using the same apps anyway.
>More seriously -- Linux reflects a different mentality and way of doing things. It is not for everyone. Downloading the software you want is the expected way to do things. I have no idea whether Ubuntu ships a "paint" replacement, but regardless, Pinta or Paint.NET are like three clicks away, thanks to the software repository approach.
Yeah, that's fine, but that unfortunately makes it a non-starter if you are looking for a direct replacement for macOS or Windows.
Yes, Paint.NET/Pinta/GIMP are always trotted out when I post this example. Pinta has been an unstable mess every time I have installed it. Plus "paint" is a near-instant loading app that is several MB in size whereas Pinta is installing loads of supporting libraries because it is a more complex application. You're telling me that in 2021 they can't just ship a simple app to allow a user to just jump in and use to resize images or add some text to basic images? This hinders the usability of the system when I can't just quickly do a simple task and move on! It is as if the developers of these distros have never understood how a regular user uses a PC.
>The people who use Linux largely recognize that yes, it is a compromise, but also that using Windows or macOS also represents a compromise. Having used Linux for nearly 15 years now myself, I can say confidently that the trade-offs for me weigh heavily in favor of Linux.
The only thing that is a given is that any comment bashing Linux will ultimately attract someone like you that tries to twist and turn my words to justify it. I've seen it for 10+ years now without fail so I'll leave it at that.
It's a shame because I have looked at the messy bug tracker for Ubuntu and have tried to fix issues, but then I stop and realize what is the point when it breaks again in some subsequent version of the distro. I wish someone would just dump a bunch of money, hire former Windows/Mac devs and properly build a lot of the supporting components of some distro, then all the other distros can roll up those better apps and then we have at least something that can be called adequate in 2021.
> The only thing that is a given is that any comment bashing Linux will ultimately attract someone like you that tries to twist and turn my words to justify it.
The only thing that's a given is that any post that proclaims the relative merits of Linux versus alternative operating systems will immediately attract posts like yours bashing it, so shrug.
> I wish someone would just dump a bunch of money, hire former Windows/Mac devs and properly build a lot of the supporting components of some distro, then all the other distros can roll up those better apps and then we have at least something that can be called adequate in 2021
My point is not that you're wrong to feel this way, but rather that you should recognize that "adequate" is ultimately subjective. Adequate for whom? Adequate how?
The Linux ecosystem is largely designed by and for people who are willing to tinker, willing to customize, who want to design software that scratches their own itches, and who aren't looking for a perfect out-of-the-box experience from a distro. A handful of people want to bring about "the year of Linux on the desktop", but they're a minority and even for them the interest is usually secondary to their own use of Linux.
There's no "twisting your words" required here. What you want is a near-perfect out of the box Linux experience. What most Linux users want is ... something else. My point is simply that that's okay. Linux doesn't have to be for everyone. Your problems with it are not everyone's problems with it. In particular,
> that unfortunately makes it a non-starter if you are looking for a direct replacement for macOS or Windows.
Most Linux users don't want a direct replacement for macOS or Windows. Maybe there's a class of "theoretical Linux switchers" out there who would switch and would be the majority of Linux users if they did, but they are not, at present, the majority of the people using and working on Linux.
What Linux provides me with is (a) a well-integrated package manager containing fully free/libre software, (b) a comprehensible system (where I can understand fully how each part works), and (c) a modifiable system (where I can change how the system operates to the extent I want). Having a perfect replacement of the MS Paint application is not even on my radar. But that said:
> Plus "paint" is a near-instant loading app that is several MB in size whereas Pinta is installing loads of supporting libraries because it is a more complex application.
Maybe you're exaggerating, but on my system Pinta has an installed size of only 2.88 MiB and has only two direct dependencies. Maybe you're thinking of the fact that it's written in Mono (the C# runtime), but that's a shared installation with all other Mono applications. It's equivalent to Windows shipping with the .NET runtime or UWP.
> But isn't this much, much easier if you just piggyback on the Apple hardware ?
Sure, it's easier, but then I'm not sure what the point is or what makes it one of the biggest opportunities of our time.
I also understand why it never happened - there is already a unix-based OS designed for perfect compatibility with the Apple hardware: it's called OSX! I'm not sure what the advantage to a consumer would be for replacing OSX with linux - other than the fact that it gives consumers choice - but of course providing a distro that only operates on a specific Mac is then limiting hardware choice so it doesn't really solve that in some respects.
And if it's just for developers, then wouldn't developers want some choice of hardware, good support for tooling, the ability to test native apps without virtualisation, etc.?
IMO I suspect the Venn-diagram of developers who:
* want a Mac but don't want OSX
* don't mind that they can't upgrade their hardware
* are willing to run some totally-new operating system
* Accept that it will initially lack the support of the runtimes they use, and some software, and won't be able to develop certain types of software because of this.
* Accept that if they wish to continue using the OS for their next laptop they will be fully locked in to a single hardware model.
...is vanishingly small.
In short, Apple does things in non-standard ways without explaining how to get another OS to work.
Apple doesn't prioritize lack of binary blobs. The EFI firmware is all proprietary. All their Wi-Fi chips have been switched to Broadcom.
They do weird non-standard things to the Thunderbolt controller, e.g., you have to lie to the firmware and claim to be macOS in order for it not to disable the Thunderbolt controller.
Newer MacBooks hide a bunch of hardware behind the proprietary T2, and whatever embedded OS runs the Touch Bar.
MacBooks are not ideologically pure, and sunk efforts to get an OS working on other machines are often wasted on MacBooks because Apple does things in bizarrely different ways.
I only ever need a laptop when traveling, I have a big desktop setup at home. I plan to take my Steam Deck traveling with a portable monitor and keyboard.
I like the idea, but I worry that Apple's m.o. is to allow something like this in the margins and then cut it off at the knees if it becomes too successful. Whether by altering their hardware, using security lock-out (à la iPhone), or replicating it without acknowledging where it came from.
> But isn't this much, much easier if you just piggyback on the Apple hardware ?
Would that even be legal? I mean, selling a commercial OS that would be marketed to install as a replacement OS on the most locked-down, most proprietary hardware on the market?
Didn't say it would be quick. Sure, it was a long cycle for them, but the ecosystem is much further along now. I think it's possible to bring to market in the hundreds of millions.
The problem with this is which developers? People who write embedded systems? Web developers? People who write custom Windows applications?
Any given developer subset is likely to find this hypothetical new developer computer to be either too complex to use or not differentiated enough from Windows or MacOs (or ChromeOS).
> Most of the software we use to create software are electron apps
This is not true for most people whose primary employment is writing software, or working on software teams. Most people who get paid to write code work primarily in either the Java or .NET ecosystems and use something like Eclipse, IntelliJ, or Visual Studio. (Many more are using niche-specific tools in a captive platform like Oracle, SalesForce, SAP, etc.) If the new platform doesn't have 100.0% binary compatibility with legacy tools written for Windows and/or MacOS, its addressable market shrinks substantially.
I would also argue that focusing on developers too much is actually a loss for users. Developer productivity above all is how we've ended up with resource hogs like Chrome and Electron as well as never-ending erosion of customizability in software as well as the user's level of control and privacy.
It's critical to have a great developer story yes, but to make a stellar platform that needs to be balanced with a great user story, and that means developers might not always get everything they want down to the letter.
Agree to disagree. Java is inherently cross platform. JetBrains stuff runs on any linux platform, as does vscode and any of the modern development workflow stuff. The development stack is steadily moving away from native apps. Vscode is essentially a webapp, and indeed it can be run as one.
VSCode is not the entire stack needed to build Windows desktop applications. There are tens or hundreds of thousands of developers who build applications for the Windows desktop. I'm not (for the most part) a .NET dev, but my current understanding is that only Visual Studio running on Windows is a first-class citizen with the ability to access all parts of the dev stack. The Windows dev stack doesn't need to move away from native Windows applications any more than does Xcode need to move away from MacOS.
>One of the biggest opportunities in technology right now is to re-run the Next/Apple playbook and create a new hardware/software technology company focused on developers.
Why would that be an opportunity though?
It would be a low margin niche, with a small market segment, of which most would stick with Apple/Lenovo/Dell.
And only a tiny fraction of them would care. I’m quite happy with my iPhone and don’t want a random HN freedom phone. I don’t ever even intend to develop something for my phone and if I did, I would want to target android and iOS.
Software ecosystems have a major chicken and egg problem.
If you wanted to create a new platform, your best option would be to go the other way and make an OS that was “just electron” and ran all the electron apps in the world faster and better than anything else. Unsurprisingly Google has tried this with Chromebooks, but their track record on consumer product development is so poor that perhaps they just didn’t execute well and someone else could pull it off.
Another challenge is that if you did that you gain wide software compatibility but you lose any obvious differentiator. The likely way to win would be if you could make a laptop that was “just as good at running web apps as your Mac, with just as much battery life and just as nice hardware” but somehow cost under $400 or so.
I actually wouldn’t be surprised if we see that coming out of Chinese OEMs in the next decade.
That would be the trick. Perhaps a screen manufacturer will realize they could make better margin if they built just enough “netbook” around the screen to sell it as a computer, or perhaps screens will just get cheaper until at some point a “good enough” netbook screen is very cheap.
Put Kubuntu on any modern Dell desktop or Thinkpad laptop. In 95% of the cases enjoy total hardware compatibility right out of the box, and a UI that will pass for "The next Windows" for most people whose needs do not exceed browsing the internet and Facebook properties.
If they need a photo manager, in my experience the most common application need after a web browser, then Digikam really cannot be beat.
While hardware may work, there are so many little things people will have to learn to deal with. Plugging in an external monitor may or may not work. Audio may or may not switch like folks are used to. (Try telling someone to launch alsamixer.) Want to use bluetooth? It might work. If it doesn't, you're going to be messing with things deeper than a "Facebook/Internet user" wants to deal with.
I am a full time Linux user. And I'll probably support anyone who wants to try it until the day I die. I absolutely love it. But we still can't enjoy some of the simplest use cases without screwing around with configs and in some cases, writing scripts that listen to DBus or udev.... So every time I hear someone say, "just use Linux" I think... nah, just buy a Chromebook (and - yeah, use Linux). If your needs are any more than that, Linux might not be for you.
+1. I used to run Linux on my primary laptop. I'm a fairly competent Linux admin. I just got really tired of being forced to be a fairly competent admin so much of the time when I was trying to just get something done.
> writing scripts that listen to DBus or udev
Exactly. Not to mention the preceding step of spending 30 minutes in forums to find someone else who has had this problem on a system with exactly the same motherboard so you don't try the things that didn't work for them.
I'm still 100% Linux on the server, headless Linux doesn't have nearly the warts as GUI Linux.
What you're describing certainly doesn't describe my experience of the last ten years or so with Linux. Sure, there are some things that won't work with Linux by choice of the manufacturer/developer, but external monitors, audio, and Bluetooth generally just work with a Dell laptop. I've been using Dell laptops in a variety of scenarios for years and that sort of stuff doesn't even cross my mind any longer.
Reasons like these are why I switched to using virtualised Linux for most of my development, trying to run it natively on hardware works great 95%+ of the time, but that last 5% is usually tricky to fix without delving into years old mailing lists. And even if you can get things working, there's still often serious quality of life problems like the bluetooth stack randomly failing 5+ times a day
> Plugging in an external monitor may or may not work. Audio may or may not switch like folks are used to.
> Want to use bluetooth? It might work.
Sounds like the problem of installing Linux on hardware designed for Windows. All those things work flawlessly on my Purism Librem 15, which came with preinstalled Linux. (Ok, I did not try Bluetooth, but saw reports that it works.)
A single Gladwell of anecdata: I've got a thermal printer which talks Bluetooth - works flawlessly when sending data from my Macbook; outputs garbage when sending the exact same data from a Raspberry Pi 3+ (which is definitely Linux on hardware designed for Linux.)
100%. That's what Apple got right then and one of the reasons why Linux has never been able to really penetrate the desktop or professional desktop market. Otherwise you're constantly debugging things that should Just Work like external displays, random device drivers, etc.
> Plugging in an external monitor may or may not work
Sadly, this is now true for Macs as well. Where by "not work" I mean: not finding/not supporting the proper resolution and/or refresh rate for the display.
I had a similar experience related to this in the past couple of months. My company issued me a Macbook, and forced an upgrade to Big Sur recently. I'd been working through the pandemic by plugging the laptop into a HDMI monitor using a USB-C-to-HDMI adapter from Anker (purchased on Amazon: https://www.amazon.com/dp/B07THJGZ9Z/). After updating to Big Sur, MacOS refused to recognize the Anker adapter with a notification "USB Accessories Disabled", and a note that it was using too much power. I did a few hours of research trying to figure out what the power draw actually was (with no monitor plugged in) and scoured specifications for other adapters in the hope that I could identify one before purchasing that might work, but found scant information published on the power draw of various adapters.
I never succeeded. I just use the Macbook with only the built-in display now.
I'm on a Dell XPS 13 with Ubuntu right now. People sometimes give examples of things like external monitors not working for them on Linux.
Here is one. Randomly, based on no relevant input from me or changes in the laptop's state, my network connection dropped and the Network Manager UI was telling me no network adaptor could be detected.
Some fumbling around in the terminal (including various reboots that didn't solve the issue), and I managed to enable the wireless adaptor, which apparently could be detected, and connect to my network, though at the same time the UI was telling me in no uncertain terms that no wireless adaptor was connected to my laptop.
Then later, again randomly, based on no relevant input from me or changes in the laptop's state, the UI agrees there is a wireless adaptor connected after all. This is on a machine currently in near-factory state with certified compatible Ubuntu preinstalled.
I share this example because one can at least comprehend why random monitors or graphics cards or whatnot don't cooperate without fiddling, can comprehend certain apps failing and crashing, can comprehend other unusual bugs. The UI thinking and acting as if there is no network card, for no reason whatsoever, is completely inscrutable even to competent users.
Someone needs to just commercialise a proprietary, at least initially closed, version of Linux (so as to turn a profit) with good design principles in mind and deal with lawsuits and license issues later. There is plenty of money in it.
What about the barrier to entry is insurmountable? Apple hardware went, in about a decade, from being something every developer I know raved about and loved to being something everyone complains about and is generally unhappy with. Apple went from being a computer company to a consumer electronics company. The more they expand into other consumer verticals (tablets, headphones, CARS), the more the computer products suffer.
I'd argue they're popular because no better alternative has been created (yet).
My point with the software is that most of the software people use to create software/apps is increasingly built with web technologies / Electron, so the native app ecosystem on a desktop machine is weakening as a moat.
I'd say the #1 reason is that most developers (myself included) want the intersection of:
- Unix-like, developing on Windows is pure torture
- Don't waste my time with configurations, drivers and other crap, I want a machine that I can be productive with out of the box. Take my money if you have to, but I don't want to edit Xorg.conf ever again.
You can talk about polish all day, but no machine that doesn't satisfy both is even close to appealing for a majority of developers in my experience.
You are right about the native app ecosystem being less of a blocker, but that's in line with my point.
What are you talking about? Any large tech company could pull that off.
The problem is those large companies are not interested in catering to a million developers, they are interested in catering to a billion people.
I do. I remember the Microsoft stores too. They went 20% of the way and stopped, deciding that extracting rent from Office and Azure was all the work they wanted to do, rather than continuing to invest in hardware and in-person support. Same with Google and their devices, except they did not even bother with in-person support.
Depends if I wanted to compete with Apple or not. I would have spent whatever it took. The board members at these companies obviously decided that cash now was more important than competing with Apple.
>A sunk cost fallacy is only applicable if the assumption is the venture would result in failure.
The sunk cost fallacy is not really about what the venture would actually do. One could be said to have fallen prey to it even if they double down and the venture eventually succeeds.
What's important is that at the time of the decision (a) the path doesn't seem to be working, and (b) they think "but I've spent too much to quit now".
This is more likely when one assumes the venture still has a chance to succeed than when they assume it will inevitably result in failure. Nobody who assumes inevitable failure would decide to continue.
I did not feel it needed to be specified, since it is a trivial fact that nothing in life is certain. But Apple's product offering is a top-to-bottom customer experience involving in-person help at stores around the country. Microsoft must have acknowledged that, since they went as far as opening stores and coming out with that line of non-malware Windows products (as a side note, it is ridiculous that Microsoft even let their ecosystem get to that point). Which, yes, they might have had empty stores, but that is because they failed to continue investing in their mobile products, or even non-mobile products. They would have had empty stores for 10+ years while they slowly built it all up, just like Apple had to.
All I know is at this point Microsoft had two options: continue investing into creating an alternative to Apple, or cancel their plans and sit back and let the Office/Azure revenue flow in.
Maybe it was a long shot, maybe they decided the size of Apple's customer base divided by two was not enough to satiate them, but whatever the case, they signaled that they do not have the talent/gumption/appetite for risk to pull it off. But if any company did have the opportunity to go for it, I would think Microsoft (and Google), with their income streams, would have been in a position to do it.
Both companies seem to dip their toes, but never follow through.
As a Mac guy I thought the Windows Phone wasn't a bad device. My buddy had one and it had all sorts of great features, but they all worked within the Windows/Xbox universe he was in.
I would have liked to see it succeed if only for there to be more competition.
Yep, in terms of OS and hardware the phones were actually equal to or better than iOS/Android devices. However, even back then the app ecosystem hurdle was already insurmountable.
This just proves it’s even harder. Even if by some insane luck you manage to build something that is very good, the general public still doesn’t want it.
I'm pretty sure it could have worked; their phones were getting traction in Europe. If they had not made the big framework screwup that destroyed their developer base, it would have worked.
Have you looked into Purism (https://puri.sm)? Purism makes its own hardware such as the Librem line of smartphones and laptops, and maintains its own Linux distribution called PureOS. Purism also funds the development of apps (https://puri.sm/fund-your-app/).
The physical kill switches work as described, but yes, the phone is "too good to be true."
It will not (yet) replace your iPhone, although you can get pretty far if you don't mind SSHing into such a device and messing with stuff on your own.
>One of the biggest opportunities in technology right now is to re-run the Next/Apple playbook and create a new hardware/software technology company focused on developers.
If it was focused on developers, it would by definition not be re-running the Apple playbook. Developers are much too small a market to justify that level of investment.
i think a lot about this, these days, to be honest. re-run the next/apple playbook? no. build an integrated hardware and software company that fixes personal computing through ground up rebuilds of everything from processors to operating systems and programming languages to solve the annoyances of security, privacy, software bloat and the generally humdrum nature of new technologies being shipped these days?
maybe.
it would be amazing to build systems that are so beautiful that they inspire people like berners-lee and carmack to do their things.
Fuchsia is really the only new OS under development that I’m aware of. And it’s not even clear whether Google intends to ship it outside of niche embedded use cases like Nest products, etc.