
Doing exactly the opposite is also a valid approach:

Absorb as many features as possible, so that more people gravitate toward a centralized place. As for bugs, if the original feature submitter isn't around, potential new maintainers will emerge and fix them.



This only works for a few high-profile projects.

Most projects developed this way die of development resource starvation. Once you have added all those features you will face all the consequences of developing a complex system: difficulty getting anything done, new devs requiring a substantial amount of time to get started, people getting discouraged quickly, etc. As you add complexity, the ways users use your platform will grow exponentially, potentially causing an explosion of bugs.

I don't mean that users using your platform in many different ways is bad; the question is "Can you support it?"

If you run into developer resource starvation, users will notice that you are making no progress and will find an alternative, which in turn makes your project even less enticing for contributors. This is how a project dies.

If you are a single developer and want to do something useful for the community, your best shot is to make something simple and add things judiciously.

So how do large open source projects survive, and how are they different? When it comes to contributors, relatively few large projects receive most of the contributions. Everybody wants to contribute to something well known, high profile, used by a lot of people. Not many developers want to make it their mission to support a useful but obscure plugin to something else.


I'd say precedent is against you here. There are many examples of widely used open source projects that don't get lots of free maintenance effort.

Maintaining something well is hard and requires commitment; you can't just crowdsource it with people dipping in and out with the occasional fix or new feature. Being careful about which PRs you accept, especially when it's a big new feature, is crucial to the health of an open source project and the mental health of its maintainers.


Doing this will exponentially increase the glue code required to tie everything together.

So, while new developers fix the features, they'll also need to fix that glue layer, making the work two or three times as big. And things get even more complicated as more is added into the mix.

This is why the UNIX philosophy is so important: it keeps software simple, compact, and much easier to maintain.


It can work, but you have to have a vision for how to integrate extra features, not just shove them into the codebase.

A consistent plugin API that lets contributors plug code into various parts of the app takes a bunch of effort, but then you're magically separated from contributors by a wall of code: you're never a blocker for someone else's effort to add features, and the stuff you don't want to maintain can land in a contrib repo.


For all the negative responses you get, there are a bunch of open source projects that work kinda like this.

Lazarus, and to a lesser extent Free Pascal, are basically like that. New features are added often, and bug fixes to code written by others who aren't around (or don't have time) are submitted all the time (e.g. personally I'm more of a user than a developer of the project, but I always use the git version built from source so that I can quickly fix any bugs that annoy me). The codebase is also far from small, and the project is an IDE that tries to do "everything", so it is far from "doing one thing".

Lazarus has been around for at least a couple of decades, so I think that shows projects do not necessarily die when doing that.

It might have to do with not having some big company pushing their weight around, so it is largely community developed and driven. Also, it is by far the most popular IDE for the language it is written in, so perhaps it is easier to find contributors.


>Projects should do exactly the opposite:

This may work as a strategy for a VC-backed startup, where you just throw money at resources to create the largest possible gravity for your product, and solve the technical debt by growing your team once you have a critical (paying) userbase.

But how would this work when there's no money to throw around, your product is free, and every contributor is a volunteer who still needs to earn their living elsewhere?


That's very idealistic, and unfortunately life is far from ideal. It's better to do one thing very well than to do many things badly.


What you're describing is (imo) more a framework than a library; think Angular, which ships with loads of modules attached or standardized.


The other factor to consider is how features complicate unrelated maintenance and new development. If you accept new features which affect other parts of the codebase, you might find that the things you are finding volunteers to work on are made harder by something most people don't use and thus aren't jumping to spend time supporting.

Years back I worked on a search engine abstraction library for Django. I'm not sure the concept is affordable at all (search engines are far less alike than SQL databases), but one thing we constantly had problems with was that most volunteers only needed one backend, while any new abstraction had to be implemented for at least three.
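The cost being described can be made concrete with a sketch of such a backend abstraction. This is a hypothetical illustration (the class and method names are invented, not the actual library's API): every method added to the abstract interface must be implemented once per supported engine.

```python
# Hypothetical sketch of a search backend abstraction. Every new
# abstract method here forces one concrete implementation per
# supported engine -- the maintenance burden described above.

from abc import ABC, abstractmethod

class SearchBackend(ABC):
    @abstractmethod
    def index(self, doc_id: str, text: str) -> None:
        """Add or update a document in the index."""

    @abstractmethod
    def search(self, query: str) -> list[str]:
        """Return IDs of matching documents."""

    # Adding e.g. an abstract `suggest()` here would break every
    # backend below until each one grows its own implementation.

class InMemoryBackend(SearchBackend):
    """Trivial stand-in; real backends would wrap Solr, Elasticsearch, etc."""
    def __init__(self) -> None:
        self._docs: dict[str, str] = {}

    def index(self, doc_id: str, text: str) -> None:
        self._docs[doc_id] = text

    def search(self, query: str) -> list[str]:
        q = query.lower()
        return [i for i, t in self._docs.items() if q in t.lower()]

backend = InMemoryBackend()
backend.index("1", "Django search abstraction")
backend.index("2", "unrelated text")
print(backend.search("django"))  # ['1']
```

A volunteer who only runs one engine naturally implements the new method for that engine alone, leaving the other N-1 backends to the maintainer.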


Interop is better. Keep things focused and simple, and concentrate on interop between tools.


Kinda depends on the project. Accepting features willy-nilly can make a mess of both the codebase and the project direction.



