I'm on a data team of 10, and after about six months of onboarding we have permission to push directly to main, provided tests pass. We generally do this for small changes, or changes where we're the context owner, with the caveat that reviews happen after the code is deployed, usually within a few days.
Personally, I like the process. It lets us move quickly and focus review effort on blocking changes. We can still get reviews before pushing when it makes sense (for large changes), but most (80%?) of our changes tend to be quite small.
That's an intriguing approach.
It makes sense that not all changes are equal and don't require the same review process. (And it certainly makes sense to encourage small changes.)
What qualifies as a "small change"? Do you have some numbers to measure it? Or is it the developer's call?