
Figure 1
A depiction of a linearly inseparable problem and possible computational solutions. A) A visualization of the input vector space of the bridge-crossing problem, with the possible states for driver 1 (DR1) on the x-axis and the possible states for driver 2 (DR2) on the y-axis. B) A model using flat binding. The two input units code for the lane each driver is using (left or right). The units in the hidden layer represent all possible lane combinations, going from both drivers driving in the left lane (top unit) to both driving in the right lane (bottom unit). The output units code for the possible responses: keep driving in your lane, or switch. C) A hierarchical binding model in which the context determines the action taken by DR1. The context unit DR2 informs DR1 about the best possible course of action.
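The flat-binding scheme in panel B can be sketched as a small feedforward network whose hidden units enumerate the four lane conjunctions. The weights and thresholds below are illustrative choices, and the response mapping assumes, for the sake of the example, that DR1 should switch whenever both drivers occupy the same lane, the XOR-like structure that makes the problem linearly inseparable:

```python
import numpy as np

def flat_binding(dr1_right, dr2_right):
    """Flat-binding network for the bridge-crossing problem.

    Inputs code each driver's lane (0 = left, 1 = right); the output
    is the response for driver 1 (DR1)."""
    s = 2 * np.array([dr1_right, dr2_right]) - 1   # lanes recoded as -1/+1
    # Hidden layer: one conjunction unit per lane combination
    # (both left, left/right, right/left, both right).
    W_hidden = np.array([[-1, -1],
                         [-1,  1],
                         [ 1, -1],
                         [ 1,  1]])
    h = (W_hidden @ s > 1.5).astype(float)         # only the matching unit fires
    # Output layer: 'keep' pools the opposite-lane units,
    # 'switch' pools the same-lane units.
    W_out = np.array([[0, 1, 1, 0],    # keep
                      [1, 0, 0, 1]])   # switch
    keep, switch = W_out @ h
    return "switch" if switch > keep else "keep"
```

No single linear layer over the two inputs can produce this mapping, which is why either the hidden conjunction units of panel B or the hierarchical scheme of panel C is needed.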

Figure 2
An example model displaying all modules, along with their units. A) The control module (top box), a processing module (left box) with four input units (left) and two output units (right), and an integrator module (right box). The input units are fully connected to the output units via weights (arrows). During an experimental trial, the LFC might tag the first two input units for synchronization with the output units (dotted arrows), after which the MFC synchronizes these tagged units via θ frequency-locked noise bursts. B) Illustration of a cortical column unit (grey). Each unit consists of one rate code neuron (R) and two phase code neurons (P). All cortical columns oscillate at a specific γ frequency (46 Hz).
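The synchronization mechanism in panel A can be sketched numerically: gamma-phase oscillators that receive a common, θ-locked burst end up phase-aligned, while untagged oscillators keep their arbitrary offsets. All parameter values below (the noise level, the burst timing, and modeling each burst as a full phase reset) are simplifying assumptions for illustration:

```python
import numpy as np

def simulate_sync(n_cols=4, tagged=(0, 1), f_gamma=46.0, f_theta=5.0,
                  dt=1e-3, t_max=1.0, seed=0):
    """Gamma phases of cortical columns; LFC-tagged columns receive
    theta-locked bursts (modeled here as a full phase reset) from the MFC."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0, 2 * np.pi, n_cols)   # random initial gamma phases
    burst_every = int(1.0 / (f_theta * dt))      # one burst per theta cycle
    for step in range(int(t_max / dt)):
        phases += 2 * np.pi * f_gamma * dt       # free-running gamma rotation
        phases += rng.normal(0.0, 0.01, n_cols)  # small independent phase noise
        if step % burst_every == 0:              # theta-locked MFC burst
            for i in tagged:
                # the burst resets the wrapped phase of tagged columns to zero
                phases[i] -= np.angle(np.exp(1j * phases[i]))
    return phases % (2 * np.pi)
```

After a second of simulated time the two tagged columns are nearly phase-aligned, whereas the untagged columns retain whatever phase offsets they drifted into.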

Figure 3
An overview of the three paradigm-specific models. The integrator unit is omitted for visualization purposes. A) Extensive learning model. Each of the 36 input units (left; only 8 depicted for clarity) represents a stimulus. The output units represent the possible actions. In this example, the first four input units are synchronized with all output units via an interplay between the LFC and MFC. B) Stroop model. Input units 1–4 represent the written word, units 5–8 represent the word color, and units 9–10 represent the relevant stimulus dimension. The output units represent actions. In this panel, all color units are synchronized with all output units. C) WCST model. Input units 1–4 code for stimulus color, units 5–8 represent the number of shapes shown, and units 9–12 code for stimulus shape. The output units again represent possible actions. Here, the MFC and LFC cooperate to synchronize all color units with the output units.
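The input layout of the Stroop model in panel B can be made concrete with a one-hot encoding. Only the unit ranges (1–4 word, 5–8 color, 9–10 relevant dimension) come from the figure; the specific word and color labels below are assumptions for illustration:

```python
import numpy as np

# Assumed stimulus labels; the figure specifies only the unit ranges.
WORDS = ["red", "green", "blue", "yellow"]
COLORS = ["red", "green", "blue", "yellow"]
DIMENSIONS = ["color", "word"]

def stroop_input(word, color, relevant):
    """One-hot input vector for a single Stroop trial.

    Units 1-4 (indices 0-3): written word; units 5-8 (indices 4-7):
    word color; units 9-10 (indices 8-9): relevant stimulus dimension."""
    x = np.zeros(10)
    x[WORDS.index(word)] = 1.0                # word identity units
    x[4 + COLORS.index(color)] = 1.0          # ink color units
    x[8 + DIMENSIONS.index(relevant)] = 1.0   # task/dimension units
    return x
```

On each trial exactly three units are active: one word unit, one color unit, and one dimension unit.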

Figure 4
Extensive learning simulation results for the Sync/Learn model. The left column (panels A, C, and E) contains results for the fast timescale, whereas the right column (panels B, D, and F) focuses on the slow timescale. The first row (panels A and B) displays RT results, the second row (panels C and D) shows the average number of errors the models made, and the third row (panels E and F) visualizes how θ power fluctuates on each timescale.

Figure 5
Stroop model simulation results: RT, error rate, and θ power. Different colors represent distinct models, while columns denote the relevant stimulus dimension. In the left column, we plot RT (A), error (C), and θ power (E) as a function of task repetitions for trials where the color dimension was relevant. In the right column, RT (B), error (D), and θ power (F) are visualized for trials where the word dimension was relevant. Green lines reflect learning; dark colors indicate that synchronization took place.

Figure 6
WCST model simulation results: RT, error rate, and θ power. Different colors represent distinct models, while columns denote the incongruency level. In the left column, we plot RT (A), error (C), and θ power (E) as a function of task repetitions for trials without incongruency. In the right column, RT (B), error (D), and θ power (F) are visualized for trials at the highest incongruency level.
