I've been working in systems neuroscience for a few years (something of a combination lab tech/student, so full disclosure, not an actual expert).
Based on my experience with model organisms (flies & rats, primarily), it's pretty amazing how closely the techniques and goals of this kind of interpretability research parallel those we use in systems neuroscience. At the most basic level, the core task of correlating a neuron's activation with a given behavior is exactly the same. The difference is cost: ML researchers can generate data trivially and analyze an entire "brain" in one shot, whereas in animal research, elucidating the role of the neurons in even a single circuit costs millions of dollars and many researcher-years.
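To make the parallel concrete, here's a minimal sketch of what that correlation task looks like on the ML side. Everything here is hypothetical: the model, the inputs, the 0/1 behavior labels, and the unit index; the forward-hook API is PyTorch's:

    import torch

    # Hypothetical setup: `model` is any trained PyTorch network, `inputs`
    # is a batch of stimuli, and `behavior` is a 0/1 label per stimulus
    # saying whether the behavior of interest occurred.
    acts = []

    def hook(module, inp, out):
        # Record one unit's activation per stimulus (unit 42 is arbitrary).
        acts.append(out[:, 42].detach())

    # "hidden_layer" is a placeholder; use whatever layer you care about.
    handle = model.hidden_layer.register_forward_hook(hook)
    with torch.no_grad():
        model(inputs)
    handle.remove()

    activation = torch.cat(acts)
    # Pearson correlation between the unit's activation and the behavior.
    corr = torch.corrcoef(torch.stack([activation, behavior.float()]))[0, 1]
    print(f"unit/behavior correlation: {corr:.3f}")

Getting that same activation trace from one identified neuron in an animal means surgery, an imaging or ephys rig, and months of work; here it's a forward hook and a correlation.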
The similarities are so clear that OpenAI, in its Microscope tool [1], even refers to the models being studied as "model organisms", an anthropomorphization I find very apt. Another article I saw on HN a while back that I thought was very cool is [2], which describes pinning down the role of a single neuron responsible for a particular token of output. That one is especially analogous because it operates at such a small scale, much closer to what systems neuroscientists studying model organisms actually do.
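For a rough sense of that kind of single-neuron analysis (this is my loose reconstruction, not the actual code from [2], and the layer/neuron indices are placeholders): one standard move is to project an MLP neuron's output weights through the unembedding matrix, logit-lens style, to see which tokens the neuron promotes when it fires. Layernorm and everything downstream are ignored for simplicity:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tok = GPT2Tokenizer.from_pretrained("gpt2")

    layer, neuron = 10, 123  # placeholders, not the actual neuron from [2]

    # Output weights for one MLP neuron: c_proj.weight is (d_mlp, d_model),
    # so one row is that neuron's contribution to the residual stream.
    w_out = model.transformer.h[layer].mlp.c_proj.weight[neuron]  # (d_model,)

    # Project into vocabulary space: which tokens does this neuron push up?
    token_logits = model.lm_head.weight @ w_out  # (vocab_size,)
    top = torch.topk(token_logits, 5)
    print([tok.decode([i]) for i in top.indices.tolist()])

Reading off what a single unit writes toward the output like this is the one-line version of an experiment that would take a systems neuroscience lab a very long time.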
[1] https://openai.com/research/microscope

[2] https://clementneo.com/posts/2023/02/11/we-found-an-neuron