I think your quote from Gallian doesn't address syzarian's example, actually.
The two properties you quoted are about the fact that distributivity works whether you're multiplying on the left or on the right. That's one possible left-right confusion, but I would argue most weak students believe it holds even when it doesn't, so in a sense they're too permissive in their reasoning.
However, syzarian's example is about a different left-right confusion: whether you can read an equality both forwards and backwards. I've seen this confusion in students: they will readily believe that you can distribute a factor over a sum (going forward), but be very skeptical about the act of factoring out (going backward), even though it's justified by the same equation. In this case, the students aren't permissive enough in their reasoning.
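To make the two directions concrete, here's a rough sketch in Python (assuming sympy is available): expand() reads the identity forwards, factor() reads the very same identity backwards.

    # The same identity a*(b + c) = a*b + a*c, read "forwards" (distributing)
    # and "backwards" (factoring out).
    from sympy import symbols, expand, factor

    a, b, c = symbols('a b c')
    print(expand(a*(b + c)))   # a*b + a*c   (distribute: forwards)
    print(factor(a*b + a*c))   # a*(b + c)   (factor out: backwards)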
This is the kind of misconception (that equality has a "direction") that's much easier to suss out with an in-person interaction, whether it's with a teacher or other students.
Oh. Wow, you are right. I honestly just read the definition in Gallian wrong, and then, even as I was typing it, I swear I thought it said "ba + bc = b(a + c)". What a crazy experience of confirmation bias on my part.
I obviously agree that a human is extremely effective at explaining that "equals" is symmetric and not the same as "implies". I'm just arguing that the task can also be done in prose to similar, if inferior, effect.
I did agree, however, that because math has so many gotchas like this, no textbook will ever warn against every potential source of confusion like the example by GGP. And so our discussion boils down to the tradeoff between the price and time investment of taking a class vs. the increased difficulty of self-studying. And how different people assign value differently to each side of the tradeoff, which I claim is the root of our disagreement.
Yes, if "untangled" means "no edge crossings", then 3 dimensions is enough for any graph to be untangled. As a proof, you can put the vertices at coordinates (0, 0, 0), (1, 1, 1), (2, 4, 8), (3, 9, 27), ..., (n, n^2, n^3), and then no two edges will cross.
A different definition of "untangled" might be that all edges have roughly the same length (which could be formally defined in lots of different ways), in which case more dimensions might be helpful for bigger graphs. (With this definition, every graph with n vertices can be untangled in n-1 dimensions, and the complete graph shows that this is a tight bound).
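One concrete way to see the n-1 upper bound for the complete graph (a sketch, not the only construction): put the n vertices at the standard basis vectors of R^n. They all lie in the (n-1)-dimensional hyperplane x_1 + ... + x_n = 1, and every edge has the same length.

    # Every pair of standard basis vectors is at distance sqrt(2), so the complete
    # graph has all edges equal inside an (n-1)-dimensional hyperplane.
    import itertools
    import numpy as np

    n = 6
    verts = np.eye(n)  # vertex i at the i-th standard basis vector
    lengths = {round(float(np.linalg.norm(verts[i] - verts[j])), 9)
               for i, j in itertools.combinations(range(n), 2)}
    print(lengths)  # {1.414213562}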
Another generalization is to look at 2-dimensional surfaces of higher genus rather than spaces of higher dimension: something like a donut or a multi-handled donut. There's a whole bunch of research already done on that topic, search for "graph embeddings".
Ah indeed. I meant 'untangled' in the looser sense of 'can be spread out over 3D space with most/all connected points reasonably close to each other'.
Seems like this is a topological property, that the 'neighbourhoods' defined by the connections 'tessellate' in some sense.
Note that for this property, you want edges to be reasonably close, but you also need to say that vertices are reasonably far apart: otherwise you could always get a "less tangled" graph by just shrinking it until it's too small to see.
You can place all n vertices within a unit sphere, with a roughly uniform distribution. Then the distance between two vertices is at most 2 and at least roughly the cube root of 4π/(3n) (the sphere's volume divided by the number of vertices). So the ratio between the longest and shortest edge lengths in that construction is proportional to the cube root of n. I would suspect that it's possible to construct graphs where that factor cannot be significantly improved upon.
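Back-of-the-envelope version of that estimate (just the arithmetic above, nothing rigorous):

    # n points roughly uniform in a unit ball: typical spacing is about
    # (volume per point)^(1/3) = (4*pi/(3*n))**(1/3), while the diameter is 2,
    # so the longest/shortest edge ratio grows like n^(1/3).
    import math

    for n in (10, 1_000, 100_000):
        spacing = (4 * math.pi / (3 * n)) ** (1 / 3)
        print(n, round(spacing, 3), round(2 / spacing / n ** (1 / 3), 3))  # ~1.241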
Unfortunately, constructive vs classical (vs linear, etc.) applies to proofs, but this is really about definitions. Proofs can be correct or incorrect pretty straightforwardly, but definitions being correct or not is really a matter of taste. (And as someone who's been formalising some mathematics in Lean recently, definitions are so much trickier to get right than proofs!)
Yes, those two facts about zero/empty cases (and so many more) are definitely related, and this class of facts is one of my favourites! Usually, if you're dealing with something algebraic in flavour (which is a very vague concept, sorry), there will be a sensible way to define the zero/empty case. This is often a good test of whether you have a uniform concept that works for all n without corner cases.
It almost irritates me when I read a book or a paper and they say that the zero/empty case is "by convention". I almost want to yell, "no! it's because that's how you make the definition uniform!"
Addition is usually defined as a binary operation, a+b, but really it should be defined as an n-ary operation; associativity tells us that doing "two layers" of addition should boil down to doing a single layer of addition on the concatenated list of operands. That forces 0-ary addition to be zero: an empty sublist can always be concatenated in without changing anything, so its sum must be something you can add without affecting the result.
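In code-shaped terms (Python happens to make this choice explicitly):

    # The "flattening" law: summing per-chunk sums must equal summing the
    # concatenation.  That only survives empty chunks because sum([]) == 0.
    chunks = [[1, 2], [], [3], []]
    flattened = [x for chunk in chunks for x in chunk]

    assert sum(sum(chunk) for chunk in chunks) == sum(flattened)  # both are 6
    assert sum([]) == 0  # the 0-ary sum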
Something similar happens with empty products (which explains why 0! = 1), empty spans, etc. In all cases, the trick is to figure out, what is the equivalent of associativity? What "syntactic" operations on the inputs (for example, concatenating a list of lists of operands) correspond to operations on the outputs (you can get the total sum by first computing partial sums)?
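Same game for products, which is exactly the 0! = 1 case (again just Python's built-in choices, but they match the uniform definition):

    # The empty product is 1, and 0! is the product over an empty range.
    import math

    assert math.prod([]) == 1           # the 0-ary product
    assert math.prod(range(1, 1)) == 1  # product over an empty range
    assert math.factorial(0) == 1       # hence 0! = 1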
A fun puzzle, if you enjoy this kind of thing: what's the determinant of the 0x0 matrix (over your favourite field or ring)? For all (square) sizes, the determinant of the zero matrix is zero, but the determinant of the identity matrix is one, and the 0x0 matrix is kind of both. So which pattern should win? Which one is stronger? I know my own answer ;)
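One way to play with the puzzle is a hand-rolled Laplace expansion (just a sketch, not any library's definition): the recursion only gives det([[a]]) == a if the 0x0 base case returns the empty product.

    # Recursive Laplace expansion along the first row.  The base case that makes the
    # recursion uniform is the 0x0 determinant: det([[a]]) == a * det_of_0x0.
    def det(m):
        if len(m) == 0:
            return 1  # the empty product
        return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
                   for j in range(len(m)))

    assert det([]) == 1
    assert det([[5]]) == 5
    assert det([[1, 2], [3, 4]]) == -2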
I was also puzzled by det(0x0) being 1, because I had built an intuition that the determinant of a matrix is the volume of the parallelepiped spanned by its columns. I made my peace by accepting that this volume intuition assumes a space of positive dimension, and by treating the zero-dimensional space as an algebraic construct.
Now you're reminding me of a wacky math conversation I had at Mathcamp [1] with a much smarter guy, who was talking about more esoteric definitions of volume in euclidean space. Something like:
- n-dimensional volume is a function from (some) subsets of space to real numbers
- it should be additive under disjoint union
- it should scale by t^n when you scale the space by a factor of t
I think the upshot of the conversation was that 0-dimensional volume of a shape should be its Euler characteristic. In the simple case of a finite set of points, the "volume" would be the number of points.
And by your earlier comment, span({}) consists of a single point, so its volume should be 1. It all works!
Yeah, it took a while to sink into my head that many of these "wait, why is span({}) = {0}?" kinds of cases have answers that sum up as "because anything else means other rules are inconsistent, and the whole thing is either less useful or useless". It's "arbitrary", but it's either the only useful option, or sometimes a simple(st) one of many.
Even just one number theory course helped a lot, since it turned that kind of consistency into a concept of its own: [this set of rules] forms a ring, [this set] forms a field, etc.
Plenty of things are intuitive if you have the right mental model backing them. I'd wager some folks think of all/any as "everything is True"/"at least one True", which makes it a trickier thing.
Mental models often get spicy with empty/"corner" cases. This isn't quite the same, but a lot of kids struggle with division as sharing rather than division as measuring, which makes division by a number less than 1 conceptually difficult (you can't picture sharing 6 cookies among half a person, but you can ask how many half-cookies fit into 6). http://langfordmath.com/ECEMath/Multiplication/DivModels.htm...
> there will be a sensible way to define the zero/empty case. This is often a good test of whether you have a uniform concept that works for all n without corner cases.
A polynomial always includes a term (monomial) of degree zero, the constant term, even when its coefficient happens to be zero. It seems natural, therefore, to index the coefficients from zero correspondingly. In other situations, 1 may be the more natural starting index.
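For example, with coefficients indexed from zero the list lines up with the powers directly (a quick Python sketch):

    # coeffs[i] is the coefficient of x**i; the constant term is coeffs[0].
    def evaluate(coeffs, x):
        return sum(c * x**i for i, c in enumerate(coeffs))

    assert evaluate([3, 2, 1], 2) == 11  # 3 + 2*2 + 1*4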