The lack of understanding is amazingly widespread. I often have to explain to people that when they look at their CPU utilization and it is at 10%, it means "you are throwing money away", not "you are efficient".
That's not really true though, or at least not for all workloads. Just as you're not "throwing money away" by not pegging your car engine in the red zone 100% of the time, you're not throwing money away by not being at 100% CPU all the time. There are other metrics, values, and issues to take into account: a pegged CPU on an unresponsive computer is useless for a desktop; a server whose CPU is pegged because it's swapping like mad can't serve requests; and a server that sits at 100% CPU when there's no load on it will just keel over when people actually start trying to interact with it.
It sounds like you missed the point here. If an eight-core server is at 10% utilization, it effectively has a single processor nearly pegged, and the process responsible is thus CPU bound (and possibly serving responses with high latency) while the other cores sit idle. Conserving CPU resources and running under capacity is wise, but that has nothing at all to do with this comment.
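The arithmetic behind this is easy to miss: the aggregate figure averages over all cores, so one saturated core out of eight shows up as roughly 10-13% "total" utilization. A quick sketch with made-up per-core numbers:

```python
# Hypothetical per-core utilization (percent) for an eight-core server:
# one core pegged on a CPU-bound process, the other seven nearly idle.
per_core = [97.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

# The aggregate figure most dashboards show is the mean across cores.
aggregate = sum(per_core) / len(per_core)

print(f"aggregate utilization: {aggregate:.1f}%")   # looks comfortably "idle"
print(f"busiest core:          {max(per_core):.1f}%")  # but one core is saturated
```

So a dashboard reporting ~13% overall can hide a core that is the bottleneck for a single-threaded workload; per-core views (e.g. pressing `1` in `top` on Linux) make this visible.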
It really depends on why you are at 10%. A file server will probably spend the vast majority of its time waiting on I/O... That's not necessarily a bad thing.