You are implicitly assuming that a goal's "optimality" can be measured objectively.
Unless you believe in absolute morality or the like, there's no such thing as an objective measure. A goal can only be optimal to an agent.
In your example, the fact that we pursue science and can destroy ants doesn't mean that their goal is "objectively less optimal". Their goal is perfectly optimal to them, though they can't achieve it if it conflicts with ours.
The same goes for a superintelligent AI.