Check the Meetings table below for the topics of future meetings.
Below are the upcoming journal clubs, as well as summaries of previous meetings.
| Date | Presenter | Reading |
|---|---|---|
It has been widely reported that population neuronal activity occupies low-dimensional manifolds in multiple brain regions, especially PFC. The origin of these low-dimensional dynamics, and how the dynamical properties relate to the network structure, remain open questions. One potential explanation is that low-dimensional dynamics are generated by low-rank network architectures. This work trained low-rank recurrent neural networks to perform five distinct cognitive tasks and theoretically analyzed how the network dynamics carry out the computation in each task. It showed that connectivity of very low rank (1-2) is sufficient to perform these cognitive tasks well. For tasks with flexible input-target mappings, multiple cell types (sub-populations) are necessary. Overall, their theory of low-rank RNNs can extract the effective latent dynamics underlying computation and, furthermore, provides a framework for networks with multitasking ability.
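A minimal sketch of the rank-1 architecture discussed above (my own illustration under simple assumptions, not the authors' code; the names `m`, `n`, `kappa` are just for this toy example): the recurrent matrix is an outer product J = m nᵀ / N, so the population activity collapses onto a single latent variable.

```python
import numpy as np

# Rank-1 recurrent network: J = m n^T / N. Illustrative sketch only.
N, T, dt = 500, 2000, 0.1
rng = np.random.default_rng(0)

m = rng.normal(size=N)          # "output" connectivity vector
n = rng.normal(size=N)          # "input-selection" connectivity vector
J = np.outer(m, n) / N          # rank-1 recurrent connectivity

x = rng.normal(size=N)          # network state
kappa = []                      # latent variable kappa = n . tanh(x) / N
for _ in range(T):
    r = np.tanh(x)              # firing rates
    x = x + dt * (-x + J @ r)   # leaky rate dynamics
    kappa.append(n @ r / N)

# Because J has rank 1, the recurrent input J @ r = kappa * m always points
# along m: the dynamics stay close to a one-dimensional manifold spanned by m.
```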
Mante et al. showed how neurons with complex responses coordinate to perform selective integration in monkey PFC. They trained an siRNN to model the psychophysical behavior of the monkeys. Analyzing the trained siRNN with linear dynamical systems theory, they found that its responses matched the monkey data almost perfectly at the population level. Furthermore, the siRNN suggested a novel mechanism that unifies selection and integration in a single circuit, in terms of a line attractor and a selection vector.
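For intuition about the line-attractor / selection-vector picture, here is a hedged sketch (not the paper's analysis code; `W` and `x_fp` are stand-ins for a trained recurrent matrix and a numerically found fixed point): linearize rate dynamics dx/dt = -x + W tanh(x) around a fixed point and read off the slow right eigenvector and its left eigenvector.

```python
import numpy as np

def jacobian(W, x_fp):
    """Jacobian of dx/dt = -x + W @ tanh(x), evaluated at x_fp."""
    return -np.eye(len(x_fp)) + W * (1.0 - np.tanh(x_fp) ** 2)[None, :]

rng = np.random.default_rng(1)
N = 100
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))   # stand-in for trained weights
x_fp = np.zeros(N)                                     # stand-in fixed point

A = jacobian(W, x_fp)
evals, R = np.linalg.eig(A)          # right eigenvectors in the columns of R
L = np.linalg.inv(R)                 # left eigenvectors in the rows of L

slow = np.argmax(evals.real)         # most slowly decaying (least stable) mode
line_attractor = R[:, slow].real     # direction that stores the integrated evidence
selection_vector = L[slow, :].real   # only input along this vector gets integrated
```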
The human brain can perform a wide variety of cognitive tasks and can flexibly learn new tasks without interfering with previously learned ones. Whether and how this learning and computational capability is inherited from the brain's connectome remains unknown. This work linked learning function and the brain connectome within the framework of reservoir computing. They showed that the brain connectome outperforms random networks in the critical dynamical regime. Furthermore, they found that functional parcellation helps regulate information flow, which might facilitate cognitive computation in the brain.
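A toy reservoir-computing setup, sketched under my own assumptions (a random matrix standing in for the connectome-derived weights, and a simple memory task rather than the paper's tasks): the recurrent weights stay fixed and only a linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 5000

# Fixed recurrent weights; in the paper these would be connectome-derived.
W = rng.normal(size=(N, N))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))   # scale toward the critical regime
w_in = rng.normal(size=N)

u = rng.uniform(-1.0, 1.0, size=T)                 # input stream
target = np.roll(u, 5)                             # toy task: recall the input 5 steps back

X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])               # reservoir update; W is never trained
    X[t] = x

# Only the linear readout is trained (ridge regression), as in reservoir computing.
lam = 1e-3
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)
print("training MSE:", np.mean((X @ w_out - target) ** 2))
```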
Catastrophic forgetting is a key issue in the continual learning paradigm. Training algorithms such as FORCE seem to bypass it to some extent. Chen and Barak applied fixed-point analysis to explicitly show how the fixed-point structure of networks changes during training in a continual learning scenario. Their work provides intuition about how the learning algorithm and the order of the task sequence affect training in continual learning.
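The fixed-point analysis itself can be sketched numerically (a hedged sketch in the spirit of speed minimization a la Sussillo & Barak, not Chen and Barak's code): minimize q(x) = ½‖-x + W tanh(x)‖² from many initial states and keep the minima with q ≈ 0.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
N = 50
W = rng.normal(scale=1.2 / np.sqrt(N), size=(N, N))   # stand-in for a trained recurrent matrix

def speed(x):
    """q(x) = 0.5 * ||dx/dt||^2 for dx/dt = -x + W @ tanh(x)."""
    v = -x + W @ np.tanh(x)
    return 0.5 * v @ v

def speed_grad(x):
    v = -x + W @ np.tanh(x)
    jac = -np.eye(N) + W * (1.0 - np.tanh(x) ** 2)[None, :]
    return jac.T @ v

fixed_points = []
for _ in range(20):                                    # many random initial conditions
    x0 = rng.normal(scale=2.0, size=N)
    res = minimize(speed, x0, jac=speed_grad, method="L-BFGS-B")
    if res.fun < 1e-10:                                # keep only (near-)exact fixed points
        fixed_points.append(res.x)

print(f"found {len(fixed_points)} candidate fixed points")
# Repeating this before and after training on each task shows how the set of
# fixed points (and hence the computation) is reshaped by continual learning.
```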
Solution degeneracy is a prominent feature of ANNs (DNNs/RNNs) trained to perform a given task; whether different solutions share any common features remains an open question. This work found that the topology of the fixed points of trained networks is universally shared across different network architectures and realizations when the networks are trained on the same task. Further, they demonstrated that this topological structure of fixed points indeed explains the computational mechanism of the trained networks.