2020 GACs

The 2020 CCN GAC lineup & schedule

>> individual GAC workshop schedules are below <<


In 2020, we received six excellent proposals for the inaugural year of the CCN Generative Adversarial Collaboration (GAC) project. All six showed superb scientific rigor and garnered strong enthusiasm and engagement from the whole community, so for this first round of the program we decided to accept all six and launch the GAC series with a bang.

See all the proposals and their open reviews on OpenReview.net, linked from the individual workshop schedules below.

At this stage, we also encourage the authors to consider inviting members of the community to join their GACs. They are not obligated to grow their teams, but if you are interested in joining a particular proposal, please contact its GAC organizers to discuss options. And even if you don't get involved in a GAC, you will still have the opportunity to submit a reply to its position paper, to be published in early 2021!


All GAC kickoff workshops were held on Zoom in October 2020. Details of the resulting publications are being posted to the NBDT special issue page as they become available.


We are grateful to the organizers and the whole CCN community for their enthusiastic participation in the GAC project so far.

Individual workshop schedules

>> link to recording <<

Thursday, October 15, 2020, 15h00 - 18h00 UTC

4:00pm - 7:00pm London / 11:00am - 2:00pm New York / 8:00am - 11:00am Los Angeles

Workshop organizers: Jeff Beck, Ralf Haefner, Xaq Pitkow, Cristina Savin, & Eszter Vértes

How does neural activity represent probability distributions and how are probabilistic computations implemented in neural circuits?
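As a concrete (and deliberately simplified) point of reference for the coding schemes introduced below, here is a minimal sketch of a linear probabilistic population code (PPC), in which the log-posterior over a stimulus is linear in the population's spike counts. All numbers (tuning widths, gains, grids) are illustrative assumptions, not taken from the proposal.

```python
# Minimal PPC sketch: Gaussian tuning curves + Poisson spiking.
import numpy as np

rng = np.random.default_rng(0)

s_grid = np.linspace(-10, 10, 201)    # candidate stimulus values
centers = np.linspace(-10, 10, 40)    # preferred stimuli of the population
sigma, gain = 2.0, 5.0                # tuning width and peak rate (assumed)

def tuning(s):
    """Expected spike count of each neuron for stimulus s."""
    return gain * np.exp(-0.5 * ((s - centers) / sigma) ** 2)

# One trial: Poisson spike counts for a true stimulus
s_true = 1.5
r = rng.poisson(tuning(s_true))

# PPC decoding: log p(s | r) = sum_i r_i * log f_i(s) + const.
# (For dense, even tuning coverage, sum_i f_i(s) is ~constant in s and
# drops into the constant, so the log-posterior is LINEAR in r.)
log_post = np.array([r @ np.log(tuning(s) + 1e-12) for s in s_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean = s_grid @ post
print("posterior mean:", mean)                                  # ~ s_true
print("posterior s.d.:", np.sqrt(((s_grid - mean) ** 2) @ post))
# Higher gain -> more spikes -> narrower posterior: uncertainty is carried
# by the overall activity level, a signature property of PPCs.
```

DDCs and neural sampling posit different mappings from probability distributions to neural activity; contrasting them with PPCs is exactly what the sessions below take up.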


Schedule

  • Intro to our GAC (25 minutes)

    • Intro to PPCs

    • Intro to DDCs

    • Intro to Neural Sampling

    • Relationships among the approaches and an overview of the questions

  • What's the inference over? From what perspective? (30 minutes)

    • Summary of subtopics + comments

    • Prepared comments

    • GAC & community responses + comments

  • Universality / equivalence classes / different generative models (30 minutes)

    • Summary of subtopics + comments

    • Prepared comments

    • GAC & community responses + comments

  • Optimality & feasibility / learnability (30 minutes)

    • Summary of subtopics + comments

    • Prepared comments

    • GAC & community responses + comments

  • Crucial experiments (30 minutes)

    • Summary of subtopics + comments

    • Prepared comments

    • GAC & community responses + comments

  • Fundamental critiques (30 minutes)

    • Prepared comments

    • GAC & community responses + comments

  • Free for all (remaining time)

    • Summary of the day's events and conclusion

Read more on OpenReview here: https://openreview.net/forum?id=uZMMO2obl50

Full proposal here: https://openreview.net/pdf?id=uZMMO2obl50

>> link to recording <<

Monday, October 19, 2020, 15h00 - 18h00 UTC

4:00pm - 7:00pm London / 11:00am - 2:00pm New York / 8:00am - 11:00am Los Angeles

Workshop organizers: Anna Ivanova, Martin Schrimpf, Leyla Isik, Stefano Anzellotti, Noga Zaslavsky, & Evelina Fedorenko

Modern cognitive neuroscience relies heavily on linear mapping models. Such models are used to link patterns of brain activity to a measure X, where X can be a feature/function of the stimulus, a behavioral measure, or activity in another brain region. Three reasons underlie the popularity of linear methods:

(a) linear readout is considered to be neurally plausible,

(b) linear models are considered more interpretable,

(c) linear models are relatively easy to build and can generalize successfully even in small data regimes.

In this workshop, we will discuss whether these reasons apply to the same extent to different cognitive domains (e.g., vision vs. language), different recording techniques (e.g., single neuron vs. fMRI), and different modeling objectives (e.g., prediction vs. interpretation).
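For readers unfamiliar with the approach, here is a minimal sketch of such a linear mapping model: ridge regression from stimulus features to a (synthetic) voxel response, scored by cross-validated prediction. The data shapes, noise level, and scoring choices are illustrative assumptions, not any particular study's pipeline.

```python
# Minimal sketch of a linear mapping model on synthetic data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_stimuli, n_features = 200, 50                        # assumed sizes
X = rng.standard_normal((n_stimuli, n_features))       # stimulus / model features
true_w = rng.standard_normal(n_features)
y = X @ true_w + 2.0 * rng.standard_normal(n_stimuli)  # synthetic voxel response

# Linear mapping: ridge regression with CV-selected regularization strength,
# a common choice in small-data regimes (reason (c) above).
mapping = RidgeCV(alphas=np.logspace(-3, 3, 13))
scores = cross_val_score(mapping, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Swapping the ridge model for a small nonlinear regressor (e.g., an MLP) in the same pipeline is one way to pose the workshop's central question empirically.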


Key questions:

  1. In which cases should we prefer linear vs. nonlinear mapping models?

  2. How can we justify our model choice based on theoretical grounds?

  3. How can we justify our model choice empirically?

  4. Are linear mapping models more interpretable than nonlinear ones?

  5. Are linear mapping models more biologically plausible and does it matter?


Workshop schedule:


SETTING THE STAGE

  • Introduction (11:00am-11:05am)

    • GAC organizers

  • What is this proposal about? (11:05am-11:20am)

    • Anna Ivanova

  • Benchmarking and the value of quantitative models (11:20am-11:35am)

    • Martin Schrimpf

  • Q&A - challenging the assumptions (11:35am-11:50am)


DEBATE 1 - VISION

  • TEAM LINEAR (11:50am-12:00pm)

    • Leyla Isik

  • TEAM NONLINEAR (12:00pm-12:10pm)

    • Kohitij Kar

  • Q&A - (non)linear models in vision (12:10pm-12:20pm)


~~~~~~~~~~~~ Break 12:20pm-12:30pm ~~~~~~~~~~~~


DEBATE 2 - LANGUAGE

  • TEAM NONLINEAR (12:30pm-12:40pm)

    • Mariya Toneva

  • TEAM LINEAR (12:40pm-12:50pm)

    • Jean-Rémi King/Laura Gwilliams

  • Q&A - (non)linear models in language (12:50pm-1:00pm)


DISCUSSION

  • Panel discussion (1:00pm-1:50pm)

    • debaters + Anna Schapiro, Martin Hebart

  • Concluding remarks (1:50pm-2:00pm)

    • GAC organizers


Full proposal: https://openreview.net/pdf?id=-o0dOwashib

See the community’s comments on OpenReview: https://openreview.net/forum?id=-o0dOwashib

Is perception probabilistic?

>> link to recording <<

Wednesday, October 21, 2020, 15h00 - 18h00 UTC

4:00pm - 7:00pm London / 11:00am - 2:00pm New York / 8:00am - 11:00am Los Angeles

Workshop organizers: Dobromir Rahnev, Ned Block, Janneke Jehee, & Rachel Denison

Discussion questions

  1. What is your definition of “probabilistic perception”? What specific coding schemes would count as non-probabilistic and what others would count as probabilistic?

  2. Do you think that perception is probabilistic or non-probabilistic (according to the definition you outlined)? Why? What are the strongest arguments for your view?

  3. How can the question of whether perception is probabilistic be addressed empirically (either via behavior or neural data)?


SCHEDULE

Part 1: Defining the controversy (up to 60 min)

  • Presentation on possible probabilistic and non-probabilistic representations (~30 min)

  • Audience discussion (up to 27 min)

3-minute break

Part 2: Arguments for and against probabilistic perception (up to 80 min)

  • Short presentations from each speaker (~40 min)

    • Rachel Denison (Team FOR)

    • Janneke Jehee (Team FOR)

    • Doby Rahnev (Team AGAINST)

    • Ned Block (Team AGAINST)

  • Audience discussion about relevant arguments for and against probabilistic perception (up to 37 min)

3-minute break

Part 3: Experimental ideas (up to 40 min)

  • Presentation on experimental designs (~10 min)

  • Audience discussion about experimental ideas (up to 30 min)


Read more on OpenReview here: https://openreview.net/forum?id=hdM6XsRIWHr

Full proposal here: https://openreview.net/pdf?id=hdM6XsRIWHr

>> link to recording <<

Thursday, October 22, 2020, 13h00 - 16h00 UTC

2:00pm - 5:00pm London / 9:00am - 12:00pm New York / 6:00am - 9:00am Los Angeles

Workshop organizers: Jessica Taylor, Helen Barron, Xiaochuan Pan, Dasa Zeithamova, Masamichi Sakagami, & Aurelio Cortese

Overview

How do biological organisms generalize previously learned information to support adaptive behaviour in novel experiences? Two leading theories argue for different mechanisms behind such behavioural flexibility. Integrative encoding postulates that coactivation of (a) representations of novel experiences and (b) related episodic memories allows information to be integrated and re-encoded, so that information from related episodic memories can be applied in novel situations. Category inference instead takes the position that the brain constructs abstract categories based on regularities (e.g., in perception or function). If information is learned for one category member, it can be inferred to apply to other members of the same category, without the need for direct experience or direct one-to-one memory associations. In this workshop we will discuss evidence for each theory, potential evidence for a hybrid implementation of the two, and the means by which we might best resolve inconsistencies.
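As a deliberately simplified illustration of the distinction (ours, not the proposal's), consider an associative-inference setup: learn A-B, then B-C, then ask whether A goes with C. In the toy sketch below, integrative encoding stores the generalized link at study time, whereas category inference computes it on demand at test time; all names and structures are hypothetical.

```python
# Toy contrast: integrative encoding vs. category inference.

# --- Integrative encoding: when B-C is studied, B reactivates its old
# partner A, and an A-C link is re-encoded directly into memory.
memory = {("A", "B")}                  # prior episodic association

def integrative_encode(pair, memory):
    x, y = pair
    links = set(memory) | {pair}
    for (u, v) in memory:              # reactivation at study
        if v == x:
            links.add((u, y))          # pre-compute the A-C link
    return links

memory = integrative_encode(("B", "C"), memory)
print(("A", "C") in memory)            # True: inference was done at encoding

# --- Category inference: no direct A-C trace is stored; A and C are judged
# related at test because they fall into the same inferred category.
categories = {"A": "cat1", "B": "cat1", "C": "cat1", "D": "cat2"}

def infer_related(x, y, categories):
    return categories[x] == categories[y]   # computed on demand at test

print(infer_related("A", "C", categories))  # True, with no stored A-C link
```

The behavioural output is identical; the mechanisms differ in when and where the work is done, which is why key question 5 below asks whether generalization depends more on activity at learning or at test.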

Key Questions

  1. Is generalization really just a simple memory association process? Or are further ‘higher order’ inference processes required?

  2. Do previous experimental tasks used to investigate generalization actually all ‘tap’ the same underlying cognitive and neural processes?

  3. Do we use the same underlying processes when generalization is based on similarities in perception as when it is based on similarities in function?

  4. Do we use the same underlying processes when generalization is based on well-established associations/categories as when it is based on newly formed associations/categories?

  5. In which cases does generalization depend more on neural activity during the learning of the initial association/category compared to neural activity at the time of the generalization response?

Schedule

1. Introduction (40 mins)

A Cortese and J Taylor

Overview of the topic, defining important terms and how they are used, raising key questions, and introducing existing experimental approaches and conflicting results (30 mins)

> Audience questions and discussion (10 mins)

2. Viewpoint pitches & debate (50 min)

H Barron, D Zeithamova, X Pan + M Sakagami

Each talk is up to 8 min.

> Panelists and audience discussion (20 mins)

5-minute break

3. Experimental proposals (60 mins)

D Zeithamova, H Barron, X Pan, J Taylor + A Cortese

Presentations on experimental designs (up to 40 mins)

> Panelists and audience discussion (20 mins)

4. Concluding remarks and discussion (10-15 mins)

Panel discussion

The initial key questions will be revisited: what did we learn that best answers them, and how can we tackle the remaining issues?


Read more on OpenReview here: https://openreview.net/forum?id=bYTPqOKLVmO

Full proposal here: https://openreview.net/pdf?id=bYTPqOKLVmO

>> link to recording <<

Friday, October 23, 2020, 13h00 - 16h00 UTC

2:00pm - 5:00pm London / 9:00am - 12:00pm New York / 6:00am - 9:00am Los Angeles

Workshop organizers: Blake Richards, Claudia Clopath, Rui P Costa, Wolfgang Maass, Luke Prince, Arna Ghosh, Roy Eyono, Franz Scherr, & Martin Pernull

Does full-rank gradient descent accurately describe the dynamics of synaptic plasticity in biological recurrent neural networks?


Schedule

BACKGROUND (1h20m): Experimental and theoretical background on learning in RNNs

  • Katharina Wilmes:​ Overview of the experimental evidence to date, and the unknowns

  • Blake Richards: Overview of temporal credit assignment in RNNs (backpropagation through time, real-time recurrent learning) and why it is not biologically plausible

  • Rui Ponte Costa: Overview of alternative approaches to learning in RNNs, including echo state networks, three-factor learning rules, and equilibrium propagation (a minimal sketch of a three-factor rule follows this list)
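Ahead of the discussion, here is a minimal sketch of one alternative named above: a reward-modulated three-factor rule, in which a synapse-local eligibility trace (presynaptic x postsynaptic factors) is gated by a global neuromodulatory reward signal. The task and all parameters are illustrative assumptions, not drawn from any panelist's work.

```python
# Minimal three-factor learning rule on a toy two-stimulus task.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)                  # synaptic weights
z = np.zeros(2)                  # synapse-local eligibility traces
lr, trace_decay = 0.2, 0.9

def trial(x, target, w, z):
    p = 1.0 / (1.0 + np.exp(-(w @ x)))   # postsynaptic firing probability
    a = float(rng.random() < p)          # stochastic response
    e = (a - p) * x                      # factors 1 & 2: pre x post "surprise"
    z = trace_decay * z + e              # decaying eligibility trace
    r = 1.0 if a == target else -1.0     # factor 3: global reward signal
    return w + lr * r * z, z, a == target

stimuli = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 0.0)]
hits = []
for t in range(2000):
    x, target = stimuli[t % 2]
    w, z, hit = trial(x, target, w, z)
    hits.append(hit)
print("accuracy, last 200 trials:", np.mean(hits[-200:]))  # well above chance
```

Unlike backpropagation through time, every quantity here is local to the synapse except the scalar reward, which is the biological appeal of this family of rules; whether such rules can match gradient-based credit assignment at scale is one of the points of debate.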

-------- break (10m) --------

DEBATE/DISCUSSION (1h30m): Panel discussion with Blake Richards, Rui Ponte Costa, Cristina Savin, Wolfgang Maass, Katharina Wilmes:

Audience instruction: indicate your prior opinion here https://www.menti.com/tvpkhuu9ww

Panelists will be prompted with the following questions to outline their current positions:

  • Is it important for the brain to estimate gradients through time?

  • Are there potential biological mechanisms for estimating gradients through time?

  • If so, how localized is such information in the brain?

  • Is plasticity at recurrent synapses key to learning and memory throughout life?

  • Could evolutionarily pre-wired recurrent circuits be enough to support learning?

Q&A session with the audience

Paths to resolution, remaining points of uncertainty, and future research directions


Read more on OpenReview here: https://openreview.net/forum?id=lc_LE6gX2NQ

Full proposal here: https://openreview.net/pdf?id=lc_LE6gX2NQ

Do grid codes afford generalization and flexible decision-making?

>> link to recording <<

Friday, October 23, 2020, 16h30 - 19h30 UTC

5:30pm - 8:30pm London / 12:30pm - 3:30pm New York / 9:30am - 12:30pm Los Angeles

Workshop organizers: Linda Q. Yu, Seongmin A. Park, Sarah C. Sweigart, Matthew R. Nassar, & Erie D. Boorman

Grid-like codes in the human entorhinal cortex (EC) have been proposed to take part in learning and representing structural knowledge, which enables efficient planning and model-based decision-making. However, it has not yet been established whether these grid-like codes serve to transfer learning and to generalize previous experiences for inference in novel situations. Do grid codes provide consistent, context-invariant representations of structural relationships, or do they flexibly switch their reference frame according to the current task demand in order to plan or discover an optimal decision policy? In this workshop, we will hear from speakers investigating generalization and grid codes from both theoretical and experimental perspectives, and we aim to arrive at an understanding of the constraints, conditions, and potential transformations of grid codes in the human brain that would afford flexible generalization.


Points of controversy

The two main viewpoints we presented in our GAC proposal are that the grid code is involved in either "learning" or "planning". The "learning" viewpoint holds that the grid code is learnt as a form of the successor representation (i.e., cached trajectories), which can be used for rapid learning in a new context (e.g., if I'm looking for milk in a grocery store I've never visited before, I can rapidly transfer the relationship between the dairy aisle and the meat aisle from my past experiences of other grocery stores). According to this hypothesis, grid codes may be used to relate two entities within a task-independent representation of the cognitive map. The "planning" viewpoint holds that the grid code is used in vector navigation, which can calculate novel trajectories (e.g., shortcuts) from one concept to another without prior experience of them. We further speculate that the grid-like codes in EC, or those in other brain areas in which grid-like signals have been found in humans (e.g., mPFC, PCC), might be modulated by the current task demand. In our proposal, we outlined two experiments that we think could address these questions, along with their predictions under each hypothesis.
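As a concrete illustration of the "learning" viewpoint (a sketch under assumed toy conditions, not the proposal's analysis), the successor representation of a small ring-shaped environment can be computed in closed form, and its eigenvectors are the periodic, grid-like basis functions that this hypothesis identifies with grid codes (cf. Stachenfeld et al., 2017).

```python
# Successor representation (SR) of a random walk on a ring of states.
import numpy as np

n, gamma = 20, 0.95
T = np.zeros((n, n))                       # random-walk transition matrix
for s in range(n):
    T[s, (s - 1) % n] = T[s, (s + 1) % n] = 0.5

# SR: expected discounted future occupancy, M = (I - gamma*T)^(-1)
M = np.linalg.inv(np.eye(n) - gamma * T)

# On this symmetric environment M is symmetric; its eigenvectors are
# Fourier modes on the ring -- spatially periodic, "grid-like" functions.
evals, evecs = np.linalg.eigh(M)
print("top non-constant eigenmode:", np.round(evecs[:, -2], 2))
```

Under the "planning" viewpoint, by contrast, grid phases are read out directly to compute displacement vectors between states; the proposed experiments are designed to pull these two predictions apart.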


If you want to read more, our proposal document is here: https://openreview.net/pdf?id=pYaBfV8sBep


Discussion questions

  1. Can the grid codes represent unlearned relationships to afford novel inferences?

  2. When does the brain make a new map? When different environments share the same structure, what determines whether the brain reuses a previously learnt cognitive map to represent the current task structure or creates a new one?


Invited speakers

  • Anne Collins, UC Berkeley

  • Andrea Banino, Google DeepMind

  • Miriam Klein-Flügge, University of Oxford

  • Matthijs van der Meer, Dartmouth College


Schedule for workshop (times shown as London / New York / Los Angeles)

  • Presentation of GAC topic and experimental proposal (40 min) 5:30pm / 12:30pm / 9:30am

  • Theoretical session (theories behind grid-based and non-grid-based generalization): 6:10pm / 1:10pm / 10:10am

    • 20 min talk -- Anne Collins, UC Berkeley

      • Title of talk: “Generalization with hierarchical reinforcement learning”

    • 5 min - Q&A (clarifying questions) 6:35pm / 1:35pm / 10:35am

    • 20 min talk -- Andrea Banino, Google DeepMind

      • Title of talk: “Understanding spatial navigation: An intersection of brain science and artificial intelligence”

    • 5 min - Q&A (clarifying questions) 7:00pm / 2:00pm / 11:00am

    • 15 min panel discussion (theoretical questions to panelists)

  • Experimental session (experimental findings on neural mechanisms of grid representations): 7:15pm / 2:15pm / 11:15am

    • 20 min talk -- Miriam Klein-Flügge, University of Oxford

      • Title of talk: “Grid-like representation of value space in medial frontal cortex guides value integration during novel decision-making in macaques”

    • 5 min - Q&A (clarifying questions) 7:40pm / 2:40pm / 11:40am

    • 20 min talk -- Matthijs van der Meer, Dartmouth College

      • Title of talk: “A shared representational geometry in the rodent hippocampus”

    • 5 min - Q&A (clarifying questions) 8:05pm / 3:05pm / 12:05pm

    • 15 min panel discussion (experimental questions to panelists)

  • Closing remarks (10 min) 8:20pm / 3:20pm / 12:20pm