Incidentally, coming from a half-baked physics background: to me this feels very similar to how, as physics moved closer to the fundamental theories of GR and QFT, it was forced to adopt the mathematical framework of representation theory[1], which is to say, to reckon with the fact that
(a) a mathematical model like a group is a representation of a physical concept, not the concept itself
(b) this process of representing things by mathematical models has some properties that are inescapable; for instance, the model must factor over the ways you can decompose the system into parts (see the second sketch below)
(c) in particular, there is some intrinsic coordinate-freedom in your choice of model. In physics, this could be the choice of, say, coordinate frame, or a choice of algebraic system (matrices vs complex numbers vs whatever); in programming, the choice of programming language or implementation detail or whatever else
(d) the coordinate-freedom is forced to align at interfaces between isolated systems. In physics this corresponds to the concept of particles (particularly gauge bosons like photons, less sure about fermions...); in programming it corresponds to APIs and calling conventions and user interfaces---you can have all the freedom you want in the details, but the boundaries are fixed by how they interop with each other (toy sketch just below).
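To make (c) and (d) a little more concrete, here's a toy Python sketch (mine, and a cartoon, not any established formalism): the same complex number kept in two internal "coordinate systems", Cartesian vs polar. The internals are completely free to differ; the only fixed thing is the observe() boundary, which is also what lets the two representations interoperate:

    import cmath

    class Cartesian:
        # one internal "coordinate system": real + imaginary parts
        def __init__(self, re, im):
            self.re, self.im = re, im

        def observe(self):
            # the interface: the one thing fixed at the boundary
            return complex(self.re, self.im)

        def mul(self, other):
            # internally: (a+bi)(c+di) = (ac-bd) + (ad+bc)i
            o = other.observe()
            return Cartesian(self.re * o.real - self.im * o.imag,
                             self.re * o.imag + self.im * o.real)

    class Polar:
        # the same concept in different "coordinates": modulus + angle
        def __init__(self, r, theta):
            self.r, self.theta = r, theta

        def observe(self):
            return cmath.rect(self.r, self.theta)

        def mul(self, other):
            # internally a different computation: multiply moduli, add angles
            o = other.observe()
            return Polar(self.r * abs(o), self.theta + cmath.phase(o))

    # internals differ, but the boundary forces agreement, so they interop:
    a = Cartesian(1.0, 2.0)
    b = Polar(abs(3 - 1j), cmath.phase(3 - 1j))  # same value as Cartesian(3.0, -1.0)
    assert cmath.isclose(a.mul(b).observe(), b.mul(a).observe())

(observe() is playing the role of the API/calling convention here; everything behind it is gauge.)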
all very hand-wavey since I understand neither side well... but I like to imagine that someday there will be a "representation theory of software" class in the curriculum (which would not be dissimilar from the formal-language concepts of denotational/operational semantics, but maybe the overlaps with physics could be exploited somehow to share some language?)... it seems to me like things mathematically kinda have to go in something like this direction.
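For what it's worth, the (b) property is close to what denotational semantics calls compositionality: the meaning of a whole must be a function of the meanings of its parts. A minimal Python sketch of that (again just my illustration, not a claim about how such a course would look):

    from dataclasses import dataclass

    @dataclass
    class Lit:
        n: int

    @dataclass
    class Add:
        left: object
        right: object

    def meaning(expr):
        # the model factors over the decomposition: the meaning of Add(l, r)
        # depends only on the meanings of l and r, not their internal structure
        if isinstance(expr, Lit):
            return expr.n
        if isinstance(expr, Add):
            return meaning(expr.left) + meaning(expr.right)
        raise TypeError(expr)

    # two different ways of carving 1+2+3 into parts agree on the meaning
    assert meaning(Add(Add(Lit(1), Lit(2)), Lit(3))) == \
           meaning(Add(Lit(1), Add(Lit(2), Lit(3))))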
[1] https://en.wikipedia.org/wiki/Representation_theory