I've been doing dev work for 20-odd years now, and I suppose I can relate to many of these realizations. A young developer's framework is full of sharp polemics that soften over time as we scrape against them through experience. It's definitely something I look for in junior devs - a few strongly held opinions and a willingness to back them up by referring to some coherent "moral" framework - as long as this doesn't descend into technical dogma or dismissal of other perspectives.
If I were to step back and try to characterize the growth in my understanding of software development in a single general statement, putting aside all the hard-won technical experience, it's this:
I am beginning to understand the subtle and complex delineation between _good_ and _useful_, and the role that execution plays in that - with all its myriad parts: prioritization, management, technical risk mitigation and hypothesis validation, consensus building on technical direction, post-hoc validation, and so on.
And usefulness has many facets in execution, social as well as technical. A good project and a good idea might come to nothing if you are oblivious to the need for an organizational mandate, and the work gets sidelined by shifting priorities. When this happens, a misstep was made earlier: either the project should never have been started at all, given awareness of the upcoming priority changes, or you should have done the organizational consensus building to ensure the work had the runway it needed to complete.
The same goes for technical consensus among implementors. Sometimes this can be avoided by giving clear mandates to trusted individual leads, but some complex projects really need the input of multiple senior members.
Lately I'm finding that many of the contributions I'm most proud of are the ones where I come to firm conclusions about what work _not_ to do: concluding that certain tasks were good but not useful enough, or determining ahead of time that a planned implementation path is not actually going to deliver what we expected, and so we should scrap the idea. This has saved inordinate amounts of time that would _otherwise_ have been wasted.
The challenge for me has been reconciling this new understanding with my old methods for evaluating my own performance. I know these days, within reasonable bounds of arrogance, how to write decently complex software and understand it. As I move on to these higher-level concerns, I find myself weighing the opportunity cost: "I _could_ be spending time writing good code right now, instead of analysis and reports and meetings and planning. Is this new activity of mine _useful_?"
I'm still building my internal model for measuring my effectiveness in this new domain, but it's clear that it's a profoundly impactful and worthwhile multi-factor optimization problem to tackle.