For modern application programming, arbitrary precision is probably the right way to do integer arithmetic. Python does that, and it doesn't seem to cause any trouble. The people who need to do massive amounts of numeric work know who they are and can take the time to learn the relevant arcana, but your typical cat-pictures app never has to worry about how big any of its integers are.
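A trivial sketch of what that buys you in Python (nothing here is application-specific, just the built-in int doing its thing):

    # Python ints grow transparently past the 64-bit range;
    # the same code path handles small and huge values alike.
    total = 1
    for i in range(1, 101):
        total *= i                 # 100! has 158 decimal digits
    print(total.bit_length())      # 525 -- well past any machine word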
So do Ruby and Erlang. The problem, of course, is that it has a cost: trivial arithmetic operations have to be checked and may need to allocate.
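You can see the allocation side of that cost from the interpreter itself; a rough illustration (the exact byte counts are CPython implementation details, not guarantees):

    import sys

    # Every int is a heap object built from 30-bit "digits", so results
    # that need more digits than their operands force an allocation.
    for v in (1, 2**30, 2**62, 2**200):
        print(v.bit_length(), sys.getsizeof(v))
    # Typical 64-bit CPython: 28, 32, 36, 52 bytes as the value grows.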
And operational coverage can be spotty outside of the trivial range, e.g. when you hand a bignum to an "integer" operation that routes through floating-point numbers, bad things can happen, since those floats are generally machine doubles (fp64).
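In Python terms, roughly (other languages hit the same wall wherever an integer op detours through doubles):

    big = 2**53 + 1
    print(float(big) == float(2**53))   # True: the low bit silently vanishes in fp64
    try:
        float(10**400)                  # too big for a double at all
    except OverflowError as exc:
        print(exc)                      # "int too large to convert to float"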
If Python handled overflow with an exception instead of promoting to a bignum, I'd bet it still wouldn't cause trouble: the actual values generally never reach bignum range.
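Here's a toy sketch of what those semantics might feel like (checked_add and IntOverflow are made-up names for illustration; real support would have to live in the interpreter, not a wrapper):

    INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

    class IntOverflow(ArithmeticError):
        pass

    def checked_add(a: int, b: int) -> int:
        # Trap instead of promoting to a bignum.
        r = a + b
        if not (INT64_MIN <= r <= INT64_MAX):
            raise IntOverflow(f"{a} + {b} overflows int64")
        return r

    checked_add(1, 2)             # 3
    # checked_add(2**62, 2**62)   # would raise IntOverflow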