I think taking logs is an unnecessary indirection in the given proof. If n^m = m^n then raising both sides to the power of 1/(nm) gives us n^(1/n) = m^(1/m). So we are looking for two distinct positive integers at which the function x ↦ x^(1/x) takes the same value. The rest of the proof then goes as before.
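As a quick (non-proof) sanity check of the claim, a brute-force search over small pairs confirms that 2^4 = 4^2 is the only solution with distinct positive integers in a modest range:

```python
# Hypothetical brute-force check (not part of the proof): find all pairs
# of distinct positive integers n < m with n**m == m**n, for small n, m.
solutions = [(n, m)
             for n in range(1, 50)
             for m in range(n + 1, 50)
             if n ** m == m ** n]
print(solutions)  # prints [(2, 4)]
```

Exact integer arithmetic, so there are no floating-point concerns; the range bound 50 is arbitrary.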
Differentiating the above function yields (1/x^2)(1 - log(x)) · x^(1/x), which is positive when log(x) < 1 and negative when log(x) > 1. So the function increases up to a maximum at e and decreases beyond it. Therefore one of our integers must be less than e, and the other greater than it. For the smaller integer there are only two possibilities, 1 and 2. Using 1 doesn't give a solution, since the equation x^(1/x) = 1 has only the solution x = 1. So the only remaining possibility for the smaller number is 2, which does yield the solution 2^4 = 4^2. And since x^(1/x) is strictly decreasing for x > e, there can't be any other integer greater than e at which it takes the same value.
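If it helps to see the shape of x ↦ x^(1/x) numerically, here's a small Python sketch (not part of the proof):

```python
import math

# The function x ↦ x^(1/x) from the argument above.
f = lambda x: x ** (1.0 / x)

# It rises up to its maximum at e and falls afterwards:
assert f(1) < f(2) < f(math.e)
assert f(math.e) > f(3) > f(4) > f(5)

# The equal pair from the text: 2^(1/2) == 4^(1/4),
# since 4^(1/4) = (2^2)^(1/4) = 2^(1/2).
assert math.isclose(f(2), f(4))
print("shape confirmed: max at e, and f(2) == f(4)")
```

Note f(3) is slightly larger than f(2) (about 1.442 vs 1.414), which is why the larger integer in the pair has to be compared against a strictly decreasing tail rather than read off directly.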
Logs will appear at some point in the line of reasoning no matter what; in this approach they are merely held back until the differentiation step. And I think the expression for the derivative of log(n)/n was much nicer to grapple with.
Seeing as we're in the integer world, I'm not sure logs need to show up ever, there's probably a proof path around unique prime factorisations. E.g. it's more or less immediately obvious that n and m would need to share the same prime factors in the same ratios.
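As a small illustration of the shared-prime-factors observation (a sketch, not a proof — `prime_factors` is a hypothetical trial-division helper, not from the original discussion): if p divides n, then p divides n^m = m^n, hence p divides m; and in the one nontrivial solution the exponent vectors are indeed proportional.

```python
def prime_factors(n):
    """Trial-division factorisation: prime factors of n with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# For the solution 2^4 = 4^2: both numbers are built from the single prime 2,
# and 4's exponent (2) is exactly twice 2's exponent (1).
print(prime_factors(2), prime_factors(4))  # prints [2] [2, 2]
```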
The key thing you need is that the sequence n^(1/n) is decreasing for n >= 3. You can get that pretty easily without logs.
Look at two consecutive terms, n^(1/n) and (n+1)^(1/(n+1)). The ratio of the first to the second -- we want to prove that this is > 1 -- is n^(1/n) / (n+1)^(1/(n+1)). This is bigger than 1 iff its n(n+1)th power is; that is, iff n^(n+1) / (n+1)^n > 1. We can write this as n · (n/(n+1))^n; so what we need is that (n/(n+1))^n > 1/n.
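If you want a quick sanity check of that integer inequality before proving it, here's a small Python loop (exact integer arithmetic, so no floating-point worries):

```python
# Check the key inequality n^(n+1) > (n+1)^n for n >= 3, which is
# equivalent to n^(1/n) > (n+1)^(1/(n+1)).
for n in range(3, 300):
    assert n ** (n + 1) > (n + 1) ** n

# The bound really does need n >= 3; it fails at n = 2:
assert 2 ** 3 < 3 ** 2  # 8 < 9, i.e. 2^(1/2) < 3^(1/3)
print("n^(1/n) is decreasing from n = 3 onwards")
```

The failure at n = 2 matches the familiar fact that the sequence 1, sqrt(2), cbrt(3), ... first rises and only then decreases.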
(Handwavily: we know that the LHS is about 1/e, so for n>=3 this should be good. But we want a proof and we're trying to do it without nontrivial analytic machinery.)
Actually, I prefer to write this as ((n+1)/n)^n < n, i.e. (1 + 1/n)^n < n. Expand the left side with the binomial theorem: we get the sum over 0 <= k <= n of (n choose k) n^(-k). And we have (n choose k) = n(n-1)...(n-k+1) / k! <= n^k / k!, with strict inequality once k >= 2, so the whole sum is strictly less than the sum over 0 <= k <= n of 1/k!. And for k > 0 we easily have k! >= 2^(k-1), so this sum is no bigger than 1 + 1 + 1/2 + 1/4 + 1/8 + ... = 3.
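For anyone wanting to double-check the chain of bounds above without trusting mental arithmetic, here's a short exact-arithmetic check in Python (just the standard library's fractions and math modules):

```python
from fractions import Fraction
from math import comb, factorial

# Exact-rational check of the three steps, for a range of n:
for n in range(2, 60):
    binom_sum = sum(Fraction(comb(n, k), n ** k) for k in range(n + 1))
    taylor_sum = sum(Fraction(1, factorial(k)) for k in range(n + 1))
    assert binom_sum == Fraction(n + 1, n) ** n  # binomial theorem for (1 + 1/n)^n
    assert binom_sum < taylor_sum                # (n choose k)/n^k < 1/k! once k >= 2
    assert taylor_sum < 3                        # since k! >= 2^(k-1) for k > 0
print("(1 + 1/n)^n < 3 for all n checked, hence < n for n >= 3")
```

Using `Fraction` keeps every comparison exact; a float version would also work here but would need care near equality.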
So, the function on integers n ↦ n^(1/n) is decreasing for n >= 3. Now the proof goes through as before.
(Maybe there's a more thoroughly number-theory-ish way to do it by looking at prime factorizations, but when I try it that way it always seems to end up rather a mess.)
[EDITED to add:] But elsewhere in the discussion users bustermellotron and diffeomorphism give (very similar) neat number-theory-ish proofs, either of which is definitely a better proof than the one using the calculations above.
This back-and-forth makes me wonder: is it possible to write proofs about proofs? E.g., "you cannot prove this result without logarithms", or "there must exist a proof that involves prime factors"?
I suppose at least some proofs vaguely like that are possible because Gödel's incompleteness theorem is one, although I suspect that that same theorem puts some constraints on these "metaproofs."
Yep, I took several metamathematics classes at UCLA! Mostly on proof theory, but also analyzing metamathematical results (like Gödel's theorems, the Henkin construction, and so on).
Agreed! I'm realizing that my comment came across as negative, but I did appreciate seeing a second path to the same place. I also agree that the expression x^(1/x) feels like a more natural place to start.
I think you often see this in "pretty" proofs compared with the more direct approach: a clever early step or some bit of startling insight.
Taking logarithms is a sort of automatic reflex in mathematics when encountering expressions like $x^x$ in contexts like this one. When you have variables in both the exponent and the base, working with logarithms is usually much simpler and easier to understand, both for technical reasons (the logarithms will not be avoided anyway, as the other commenter said) and for intuition-building: a product of two functions is easier to reason about (signs, monotonicity, etc.) than an exponentiation with one function in the base and another in the exponent.
Plus, this approach (IMO) visualises better than the proposed alternative.
I think it's arguable that OP's approach is cleaner for some value of clean, but the article's approach gave me happy flashbacks to doing number theory in undergrad that OP's approach didn't.
When communicating to the wider world, aesthetics do matter, and yeah, everything you said as well.