I wrote a small Common Lisp book for Springer-Verlag around the same time that Peter wrote this fantastic book, and I met him shortly thereafter at a Lisp Users and Vendors conference in San Diego. After 30 years of following his writing, Python notebooks, etc., I think he simply has a higher-level view of software and algorithms.
There is some talk on this thread about good old-fashioned symbolic AI. I have mixed feelings about it. I currently work for an all-in Common Lisp + GOFAI company, but most of the best successes of my career involved neural networks and, later, deep learning.

I think we need a new hybrid approach, but it is above my skill level to know what that would be. I keep hoping to see new research and new paradigms.
We’re building that paradigm right now, in the math community.
Homotopy type theory tells us that semantic information from a symbolic logic can also be represented by the topology of diagrams. But this is a two-way relationship: the topological structure of diagrams also corresponds to some synthetic type theory.
The conjecture is that the topology of, e.g., word2vec point clouds will correspond to some synthetic type theory describing the data. This is bolstered by recent advances in ML: Facebook translating between languages by aligning embedding geometries, data covering models, etc.
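To make the "aligning embedding geometries" idea concrete, here is a minimal sketch of the core trick behind unsupervised translation work like Facebook's MUSE: if two embedding spaces differ (approximately) by a rotation, an orthogonal Procrustes solve recovers the map between them. The synthetic data below (a random point cloud and a hidden orthogonal map) is purely illustrative, not the actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50

# Hypothetical "source language" embeddings: a random point cloud.
X = rng.normal(size=(n, d))

# Hidden orthogonal map between the two spaces (QR gives an orthogonal Q).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

# "Target language" embeddings: the same cloud, rotated.
Y = X @ Q

# Orthogonal Procrustes: W = argmin ||X W - Y||_F over orthogonal W,
# solved in closed form via the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# W recovers the hidden map, so the aligned source cloud matches the target.
print(np.allclose(X @ W, Y))
```

The point relevant to the conjecture: the alignment works because the geometry of the point cloud itself carries the semantic structure, so matching shapes suffices to match meanings, with no dictionary required.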
I'm personally working on the problem of translating types into diagrams stored as matrices, in the hope that building one direction will give insights into the other. (Again, because equivalence relationships are symmetric.)