So write up a grammar, feed it to a parser generator like GOLD, run the result through a premade parser engine, and BOOM, you're done. Then you can start on the stuff in the middle that really needs more people working on it.
The problem is that writing a good grammar requires understanding of parsers.
(Also, real parsers tend to be hand-written, judging from open-source language implementations; some have actually migrated from generated parsers to hand-written recursive descent. There must be a reason.)
The reason basically boils down to generated parsers being brittle. If you need to tweak the parser slightly (for example, to recover more gracefully from a missing semicolon), it's difficult or even impossible to get a generated parser to do it well.
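To make the semicolon example concrete, here is a hypothetical sketch (toy `name = number` statements, not any real language) of the kind of targeted recovery that is easy in a hand-written recursive descent parser: report the error, pretend the semicolon was there, and keep parsing. A table-driven generated parser gives you no obvious hook for this.

```python
def parse_statement(tokens, pos, errors):
    """Parse one `name = number` statement; recover if ';' is missing."""
    name = tokens[pos]
    pos += 1
    assert tokens[pos] == "="  # toy grammar: always `name = number`
    pos += 1
    value = tokens[pos]
    pos += 1
    if pos < len(tokens) and tokens[pos] == ";":
        pos += 1  # consume the semicolon
    else:
        # Targeted recovery: log the error, continue as if ';' were present.
        errors.append(f"missing ';' after '{name} = {value}'")
    return (name, value), pos

def parse_program(tokens):
    pos, stmts, errors = 0, [], []
    while pos < len(tokens):
        stmt, pos = parse_statement(tokens, pos, errors)
        stmts.append(stmt)
    return stmts, errors
```

Because recovery is just another `if` branch in code you own, you can make it as precise as you like, which is exactly what's hard to retrofit onto a generated parser.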
Alarm bells ring in my head whenever someone claims "it's a solved problem" about an engineering (as opposed to mathsy) problem. Unless the context is really narrow (at which point it basically is a math problem), that claim is usually wrong.
Those of us who care about syntax (syntax optimized for the user, not for the implementer) will have to keep thinking about parsers for the foreseeable future.
PEG is basically a thin abstraction over recursive descent parsing. Everything you can do in a handwritten parser can be expressed mostly declaratively in PEG.
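The correspondence is direct: each PEG rule maps one-to-one onto a recursive-descent function, with PEG's ordered choice `/` becoming "try the first alternative, fall back to the next". A hypothetical mini-rule (not tied to any particular PEG tool) makes this visible:

```python
# PEG rule:  Value <- Number / '(' Value ')'
# maps directly onto one recursive-descent function.

def parse_value(s, pos):
    """Return (node, new_pos) on success, or None, like a PEG rule."""
    # First alternative: Number
    end = pos
    while end < len(s) and s[end].isdigit():
        end += 1
    if end > pos:
        return int(s[pos:end]), end
    # Second alternative: '(' Value ')' -- tried only if the first
    # fails, exactly like PEG's ordered choice operator '/'.
    if pos < len(s) and s[pos] == "(":
        result = parse_value(s, pos + 1)
        if result is not None:
            node, after = result
            if after < len(s) and s[after] == ")":
                return node, after + 1
    return None  # both alternatives failed

# parse_value("(42)", 0) -> (42, 4)
```

A PEG engine just writes this function for you from the rule, which is why the declarative form loses essentially nothing.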
And no, there is not a single good reason to avoid PEG, especially when the choice is between a hand-written parser and a PEG. There is a lot of FUD about PEGs; you should ignore it. Your links point to the uninformed mumbling of people who never tried implementing a proper PEG.
And I have never met a language for which writing a PEG parser was not trivial. A pro tip: use PEG alongside a Pratt parser for the binary expressions.
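For the record, the Pratt half of that tip can be sketched in a few lines (illustrative precedence levels, not any specific implementation): the PEG handles the surrounding grammar, and a precedence-climbing loop handles the binary operators.

```python
# Minimal Pratt-style precedence climbing for binary expressions.
# The precedence table is made up for illustration; a PEG grammar would
# call parse_expression() wherever it needs an expression.

PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

def parse_expression(tokens, pos=0, min_prec=1):
    """Parse tokens like ["1", "+", "2", "*", "3"] into a nested tuple."""
    left = int(tokens[pos])  # atoms are plain integers in this sketch
    pos += 1
    while pos < len(tokens) and PRECEDENCE.get(tokens[pos], 0) >= min_prec:
        op = tokens[pos]
        # Higher-precedence operators bind tighter: parse the right-hand
        # side with min_prec raised past the current operator's level.
        right, pos = parse_expression(tokens, pos + 1, PRECEDENCE[op] + 1)
        left = (op, left, right)
    return left, pos

# parse_expression(["1", "+", "2", "*", "3"])[0]
#   -> ("+", 1, ("*", 2, 3))
```

This avoids the one genuinely awkward spot in PEG (deeply nested precedence rules for operators) while keeping the rest of the grammar declarative.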