Seems to me that dropping terms of a Taylor expansion could have wide-ranging consequences for the coherence of an artificial mind, making this approach infeasible.
In general you don't actually need crazy precision to train the nets, and a small number of Taylor expansion terms tends to approximate functions fairly well anyway.
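For a rough sense of scale, here is a minimal sketch (my own illustration, not from the original discussion; the choice of tanh, the interval, and the truncation order are all just assumptions for demonstration) showing how far a few-term Taylor expansion of a smooth activation drifts from the exact function:

```python
import numpy as np

def tanh_taylor(x):
    # Maclaurin series of tanh truncated after the x^7 term
    return x - x**3 / 3 + 2 * x**5 / 15 - 17 * x**7 / 315

# Compare against the exact function over a range where pre-activations
# often land; the interval is an illustrative choice.
x = np.linspace(-1.0, 1.0, 1001)
err = np.abs(np.tanh(x) - tanh_taylor(x))
print(f"max abs error on [-1, 1]: {err.max():.4f}")  # on the order of 1e-2 at the edges
```

The worst-case error of even this short truncation is a percent or so over that range, which is the kind of slack networks already tolerate when trained in reduced precision.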