I find one of the biggest misconceptions programmers have about mathematical notation is that it's somehow just a terse, badly implemented programming language. But this is a very poor understanding of what mathematical notation is doing.
I think this error in thinking comes from the fact that Sigma notation can often be trivially implemented as a for loop.
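For instance, the finite sum Σ i² for i = 1..n maps almost line-for-line onto a loop. Here is a minimal Python sketch of that correspondence (the function name is just for illustration):

```python
def sum_of_squares(n):
    """Direct translation of the sigma expression: sum of i^2 for i = 1..n."""
    total = 0                      # the running value of the sum
    for i in range(1, n + 1):      # index variable and bounds read straight off the notation
        total += i * i             # the summand
    return total

print(sum_of_squares(10))  # 385, agreeing with the closed form n(n+1)(2n+1)/6
```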
Programming languages are designed to describe a specific computation, whereas mathematical notation is typically trying to describe an idea (one that might not even have an implementation!). Notation only sometimes, and coincidentally, describes a computation as well.
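As a concrete example of notation outrunning computation, the very same sigma symbol routinely denotes something no loop can finish evaluating, such as the classic Basel sum:

```latex
% The same notation, now naming the value of a limit rather than a finite procedure:
\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}
% A program can only approximate this by truncating the sum; the notation
% expresses the idea (the exact value) directly.
```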
The ambiguity, implied variables, and so on are an essential part of mathematical notation, in the same way they are in common spoken language. Mathematical notation exists to help abstract and work out very hairy ideas, and often that ambiguity is necessary to show connections.
> code should be optimized for readability, not writability
Mathematical notation is readable if you're literate in it. It takes lots of practice to become fluent in it, but once you become more familiar it's much easier to read than text (which is why it's used in the first place). Mathematical notation is an extension of mathematical writing, not computational implementation.
Reading mathematical notation is much closer to reading poetry than reading code.
> I find one of the biggest misconceptions programmers have about mathematical notation is that it's somehow just a terse, badly implemented programming language. But this is a very poor understanding of what mathematical notation is doing.
No, we think that because proofs and programs are isomorphic[1]. It's not a mistake: traditional mathematical notation provably is a terse, badly implemented programming language. Actually it's worse than that, because oftentimes it doesn't even parse. Now I'm not going to say I can't on some level see the appeal. After all, I think Perl is a lot of fun to code in.
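For what it's worth, the correspondence in [1] (propositions as types, proofs as programs) can be made completely concrete. A minimal Lean 4 sketch, where the same lambda term is read both as a proof and as a program (the name `const` is just for illustration):

```lean
-- A proof of the proposition A → B → A ...
example (A B : Prop) : A → B → A :=
  fun a _ => a

-- ... is literally the same term as the polymorphic constant function on types.
def const (α β : Type) : α → β → α :=
  fun a _ => a
```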
Naturally, its adherents are practiced at making a virtue out of its defects. Who wants to admit they dedicated considerable brainpower to doing something in a fundamentally suboptimal way? That doesn't really matter, though. As Mathematica and other tooling show, the formalists have already won, and now it's just a matter of mopping up the stragglers, or waiting for them to age out. This isn't terribly surprising to those who know the basics of the history of mathematics. It took something on the order of two centuries before Recorde's innovation of the equals sign was generally accepted.
I get the feeling that you're stuck in Terry Tao's 'rigorous phase' of mathematical understanding, where everything in the end is a computation and has to be carried out according to a set of rigorous steps and definitions.
I get that, but it does somewhat miss the cultural context of how mathematically fluent people use mathematics to communicate with each other. When you're discussing maths with colleagues in front of a blackboard, you're often not really trying to prove anything, but discussing the relationships between mathematical objects. In this context the ambiguity and implication in the notation is almost a requirement; otherwise the communication speed tanks.
Having a mathematical discussion between a group of people all fluent in the context and terminology is a wonderfully fluid thing.
Complete proofs and programs are isomorphic. Many proofs, while incomplete, are perfectly legible to experienced practitioners, who can fill in the details without getting bogged down in trite steps. Moreover, isomorphisms are simply provable facts between two types of objects. Isomorphisms are generally used to convert one object into another that is more amenable to being used in a given proof. An isomorphism does not compel that conversion, and there are many trivial isomorphisms of limited use.
As a mathematician who’s only recently started to get into computation and programming, I think the difference between my thought patterns when switching hats is so fascinating.
I was so accustomed to hearing that mathematics is nothing if not rigorous, but the more I reflect, the more mathematics seems to depend on social convention and agreement within a community. While an outsider might think that proofs rigorously establish theorems, the purpose of a proof might better be seen as providing enough detail to convince a substantial portion of the prominent mathematicians in a field that the result is correct. In fact, there are statements (e.g. the ABC conjecture) for which a “proof” has been proposed, but not enough mathematicians have expertise with the techniques involved to agree on whether the proof is sufficient (though I’ve heard the general opinion is that it does not suffice). William Thurston wrote one of my favorite essays related to this topic: https://www.math.toronto.edu/mccann/199/thurston.pdf
Reflecting on my own experience in mathematics, a better way to think of proofs is as being composed of “thought patterns” which many mathematicians agree are likely to be correct. When I scan a proof, I don’t look through every detail to verify that it is correct, but rather run it through a series of high-level tests to see if it fails in any way; if it passes all of those, I look more closely at the argument and analyze the structure and mathematical power of each statement (e.g. one is unlikely to establish a hard analytic result through purely algebraic means, so where is the magic happening?), and so on until I’ve convinced myself that the argument is probably correct. Other times, the result may be “visually apparent” (e.g. in geometry), at which point it might be sufficient for me just to connect certain canonical arguments with the pictures as I read through the proof. For an excellent overview of this process, read Terry Tao’s blog on identifying errors in proofs: https://terrytao.wordpress.com/advice-on-writing-papers/on-l....
I don’t feel as confident commenting on the programming/computational perspective, as I’ve probably developed a very idiosyncratic way of thinking from approaching the topic so late in my education, but my feeling is that the two are quite different, and that the kinds of things a mathematician wants to convey to another mathematician rely much more on “trust” than on the kind of rigor that might be needed by a computer.
I think this would be an interesting topic to explore in longer form.