I disagree with this. Being able to read and understand the math in the paper behind the algorithm you're implementing is useful, and calculus is common in ML papers.
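As one illustration (my wording, not from any particular paper): even the basic gradient-descent update that most papers take for granted is stated in calculus terms, and backpropagation is just the chain rule applied repeatedly.

```latex
% Gradient descent: each step moves the parameters \theta against the
% gradient of the loss L, scaled by a learning rate \eta.
\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} L(\theta_t)
```

If that notation is opaque, so is a large fraction of the methods section in most ML papers.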
You may have some short-term success without understanding the algorithms at all, but as the field changes and you are no longer in school, being able to keep up at least somewhat with papers is very useful.
I agree that the day-to-day is mostly about formatting data though!
Well, I took calculus, and I did a Master's thesis (much smaller than a Ph.D., obviously) on neural networks, and I didn't find much application. Plus, the vast majority of work in ML is not taking an ML paper and implementing it in code. It's transforming raw data into a format the existing ML library can accept as input, selecting which cases are useful for training (e.g. making sure each important sub-case is adequately represented), and other work around the data and how it is fed into the (already existing) ML library. Perhaps also playing with the library's options for what kind of model you build.
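To make that concrete, here's a minimal sketch of the kind of work I mean. The data, function name, and 50% sampling fraction are all made up for illustration; the point is that "make sure each important sub-case is represented" is a plain data-wrangling task, no calculus involved:

```python
# Hypothetical sketch: much day-to-day ML work is reshaping raw records
# and making sure each important sub-case survives into the training set.
import random
from collections import defaultdict

def stratified_sample(records, label_key, frac, seed=0):
    """Draw roughly `frac` of the records from EACH label group,
    so rare sub-cases are not crowded out by common ones."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for r in records:
        groups[r[label_key]].append(r)
    sample = []
    for rows in groups.values():
        k = max(1, round(frac * len(rows)))  # keep at least one per sub-case
        sample.extend(rng.sample(rows, k))
    return sample

# Toy data: "fraud" is the rare but important sub-case.
data = [{"label": "ok"} for _ in range(98)] + [{"label": "fraud"} for _ in range(2)]
train = stratified_sample(data, "label", frac=0.5)
print(sum(r["label"] == "fraud" for r in train))  # rare class still present
```

A naive 50% random sample could easily drop the rare class entirely; the stratified version guarantees it appears.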
But 99% of the alterations you can make to an ML library will not make nearly as much difference as what data you feed into it and how. If it's the right data, many ML models will work; if it's not, none of them will. And regardless, none of this requires, or even really benefits from, calculus.
I agree with all of this, but ML will evolve a lot over the next 30 years of your career. Being able to read the papers as the field evolves is useful, and many of the papers (especially the ones that shift the field) will assume knowledge of calculus.