Commit 9bf8167: fix main branch links

kkoreilly committed Aug 19, 2024
1 parent ab1df8c
Showing 6 changed files with 8 additions and 8 deletions.
ch10/sem/README.md (1 addition, 1 deletion)
@@ -4,7 +4,7 @@ Back to [All Sims](https://github.com/CompCogNeuro/sims) (also for general info

This network is trained using Hebbian learning on paragraphs from an early draft of the *Computational Explorations* textbook, allowing it to learn about the overall statistics of when different words co-occur with other words, and thereby learning a surprisingly capable (though clearly imperfect) level of semantic knowledge about the topics covered in the textbook. This replicates the key results from the *Latent Semantic Analysis* research by [Landauer and Dumais (1997)](#references).

- The `Input` layer has one unit for each different word that appeared with a frequency of 5 or higher (and excluding purely function words like "the" etc) -- 1920 words in total. Each paragraph is presented as a single input pattern during training, with each word in the paragraph activated in the input (if the same word appears multiple times, it still just has the same unit activation). After each such paragraph, Hebbian learning between input and active `Hidden` layer neurons takes place, using our standard BCM-style learning mechanism, as explored earlier in the [v1rf](https://github.com/CompCogNeuro/sims/blob/master/ch6/v1rf/README.md) and [self_org](https://github.com/CompCogNeuro/sims/blob/master/ch6/self_org/README.md) projects. This model also includes recurrent lateral excitatory and inhibitory connections just like `v1rf`, which can induce a topological organization of neurons. Unlike in the visual model, the high-dimensional nature of semantics makes this somewhat harder to understand but nevertheless the same principles are likely at work.
+ The `Input` layer has one unit for each different word that appeared with a frequency of 5 or higher (and excluding purely function words like "the" etc) -- 1920 words in total. Each paragraph is presented as a single input pattern during training, with each word in the paragraph activated in the input (if the same word appears multiple times, it still just has the same unit activation). After each such paragraph, Hebbian learning between input and active `Hidden` layer neurons takes place, using our standard BCM-style learning mechanism, as explored earlier in the [v1rf](https://github.com/CompCogNeuro/sims/blob/main/ch6/v1rf/README.md) and [self_org](https://github.com/CompCogNeuro/sims/blob/main/ch6/self_org/README.md) projects. This model also includes recurrent lateral excitatory and inhibitory connections just like `v1rf`, which can induce a topological organization of neurons. Unlike in the visual model, the high-dimensional nature of semantics makes this somewhat harder to understand but nevertheless the same principles are likely at work.

This network takes a while to train, so we will start by loading in pre-trained weights.

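The README excerpt above describes BCM-style Hebbian learning between the active `Input` words and `Hidden` units. As a rough sketch of the general idea only (this is not the leabra code; the function, constants, and sliding threshold here are invented for illustration), a Hebbian update with a floating modification threshold could look like this in Go:

```go
package main

import "fmt"

// bcmUpdate applies a BCM-style Hebbian update to the weights w from input
// activities x onto one hidden unit with activity y. The sliding threshold
// theta tracks the unit's recent average activity, so weight change is
// positive only when y exceeds that recent average (potentiation) and
// negative otherwise (depression). Weights are kept in the 0..1 range.
func bcmUpdate(w, x []float64, y, theta, lrate float64) {
	for i := range w {
		w[i] += lrate * y * (y - theta) * x[i]
		switch {
		case w[i] < 0:
			w[i] = 0
		case w[i] > 1:
			w[i] = 1
		}
	}
}

func main() {
	// Three input "words"; the first two are active in this paragraph.
	w := []float64{0.1, 0.1, 0.1}
	x := []float64{1, 1, 0}
	const (
		y     = 0.8  // hidden unit activity after settling
		theta = 0.5  // sliding modification threshold
		lrate = 0.02 // learning rate
	)
	bcmUpdate(w, x, y, theta, lrate)
	fmt.Println(w) // weights from active words increase; the inactive word's weight is unchanged
}
```

Over many paragraphs, an update of this general shape lets each hidden unit come to represent a cluster of words that tend to co-occur, which is the basic statistical structure the sem model exploits.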
ch2/neuron/neuron.go (1 addition, 1 deletion)
@@ -388,7 +388,7 @@ func (ss *Sim) ConfigNetView(nv *netview.NetView) {
// ConfigGUI configures the Cogent Core GUI interface for this simulation.
func (ss *Sim) ConfigGUI() {
title := "Neuron"
- ss.GUI.MakeBody(ss, "neuron", title, `This simulation illustrates the basic properties of neural spiking and rate-code activation, reflecting a balance of excitatory and inhibitory influences (including leak and synaptic inhibition). See <a href="https://github.com/emer/leabra/blob/master/examples/neuron/README.md">README.md on GitHub</a>.</p>`)
+ ss.GUI.MakeBody(ss, "neuron", title, `This simulation illustrates the basic properties of neural spiking and rate-code activation, reflecting a balance of excitatory and inhibitory influences (including leak and synaptic inhibition). See <a href="https://github.com/emer/leabra/blob/main/examples/neuron/README.md">README.md on GitHub</a>.</p>`)
ss.GUI.CycleUpdateInterval = 10

nv := ss.GUI.AddNetView("NetView")
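The simulation description in the string above refers to a point-neuron model balancing excitation, inhibition, and leak. A deliberately simplified, hypothetical conductance-based membrane update (the constants and threshold are made up and do not match the actual emergent/leabra parameters) gives the flavor:

```go
package main

import "fmt"

// Illustrative normalized reversal potentials and conductances.
const (
	eRevE = 1.0  // excitatory reversal potential
	eRevI = 0.25 // inhibitory reversal potential
	eRevL = 0.3  // leak reversal potential
	gLeak = 0.1  // leak conductance
	dtVm  = 0.3  // membrane integration rate
	vmThr = 0.5  // spike threshold
)

// stepVm advances the membrane potential one cycle given excitatory (gE) and
// inhibitory (gI) conductances, each driving Vm toward its reversal potential.
// It returns the updated potential and whether it crossed the threshold.
func stepVm(vm, gE, gI float64) (float64, bool) {
	vm += dtVm * (gE*(eRevE-vm) + gI*(eRevI-vm) + gLeak*(eRevL-vm))
	return vm, vm >= vmThr
}

func main() {
	vm := 0.3 // start at the resting (leak) potential
	for cyc := 0; cyc < 10; cyc++ {
		var spiked bool
		vm, spiked = stepVm(vm, 0.4, 0.1)
		fmt.Printf("cycle %2d: Vm=%.3f spiked=%v\n", cyc, vm, spiked)
	}
}
```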
ch3/face_categ/README.md (1 addition, 1 deletion)
@@ -38,7 +38,7 @@ You should see the network process the face input and activate the appropriate o

## Using Cluster Plots to Understand the Categorization Process

- A [ClusterPlot](https://github.com/CompCogNeuro/sims/blob/master/ch3/face_categ/ClusterPlot.md) provides a convenient way of visualizing the similarity relationships among a set of items, where multiple different forms of similarity may be in effect at the same time (i.e., multidimensional similarity structure). If unfamiliar with these, please click that link to read more about how to read a cluster plot. First, we'll look at the cluster plot of the input faces, and then of the different categorizations performed on them, to see how the network transforms the similarity structure to extract the relevant information and collapse across the irrelevant.
+ A [ClusterPlot](https://github.com/CompCogNeuro/sims/blob/main/ch3/face_categ/ClusterPlot.md) provides a convenient way of visualizing the similarity relationships among a set of items, where multiple different forms of similarity may be in effect at the same time (i.e., multidimensional similarity structure). If unfamiliar with these, please click that link to read more about how to read a cluster plot. First, we'll look at the cluster plot of the input faces, and then of the different categorizations performed on them, to see how the network transforms the similarity structure to extract the relevant information and collapse across the irrelevant.

* Press the `Cluster Plots` button in the toolbar, and then click on the `eplot.Plot2D` button next to the `ClustFaces` line in the control panel on the left. This will pull up a cluster plot run on the face `Input` layer images.

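The cluster plots described in this README excerpt start from pairwise distances between layer activity patterns, which are then merged hierarchically. As an illustrative sketch of just that distance step (not the actual `eplot` or cluster code, whose functions are not shown in this diff), one could compute a Euclidean distance matrix like this:

```go
package main

import (
	"fmt"
	"math"
)

// distMatrix returns the matrix of pairwise Euclidean distances between
// activity patterns; a cluster plot then repeatedly merges the closest items.
func distMatrix(patterns [][]float64) [][]float64 {
	n := len(patterns)
	d := make([][]float64, n)
	for i := range d {
		d[i] = make([]float64, n)
		for j := range d[i] {
			var sum float64
			for k := range patterns[i] {
				diff := patterns[i][k] - patterns[j][k]
				sum += diff * diff
			}
			d[i][j] = math.Sqrt(sum)
		}
	}
	return d
}

func main() {
	// Tiny made-up "face" patterns: two similar items and one outlier.
	patterns := [][]float64{
		{1, 0, 1, 0},
		{1, 0, 0.8, 0},
		{0, 1, 0, 1},
	}
	for _, row := range distMatrix(patterns) {
		fmt.Printf("%.2f\n", row)
	}
}
```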
ch7/hip/README.md (1 addition, 1 deletion)
@@ -2,7 +2,7 @@ Back to [All Sims](https://github.com/CompCogNeuro/sims) (also for general info

# Introduction

- In this exploration of the hippocampus model, we will use the same basic AB--AC paired associates list learning paradigm as we used in the standard cortical network previously (`abac`). The hippocampus should be able to learn the new paired associates (AC) without causing undue levels of interference to the original AB associations (see Figure 1), and it should be able to do this much more rapidly than was possible in the cortical model. This model is using the newer *Theta Phase* model of the hippocampus ([Ketz, Morkanda & O'Reilly, 2013](#references)), where the EC <-> CA1 projections along with all the other connections have an error-driven learning component organized according to the theta phase rhythm. See [leabra hip](https://github.com/emer/leabra/tree/master/hip) on github for more implementational details.
+ In this exploration of the hippocampus model, we will use the same basic AB--AC paired associates list learning paradigm as we used in the standard cortical network previously (`abac`). The hippocampus should be able to learn the new paired associates (AC) without causing undue levels of interference to the original AB associations (see Figure 1), and it should be able to do this much more rapidly than was possible in the cortical model. This model is using the newer *Theta Phase* model of the hippocampus ([Ketz, Morkanda & O'Reilly, 2013](#references)), where the EC <-> CA1 projections along with all the other connections have an error-driven learning component organized according to the theta phase rhythm. See [leabra hip](https://github.com/emer/leabra/tree/main/hip) on github for more implementational details.

![AB-AC Data](fig_ab_ac_data_catinf.png?raw=true "AB-AC Data")

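The theta-phase learning mentioned in the paragraph above contrasts network states across phases of the theta rhythm to obtain an error-driven signal. As a loose analogy only (the actual leabra `hip` rule involves specific EC <-> CA1 phase relationships not captured here), a generic contrastive plus/minus phase weight update looks like:

```go
package main

import "fmt"

// contrastiveUpdate adjusts the weight between a sending and a receiving unit
// from the difference between their co-activation in the "plus" (target)
// phase and the "minus" (guess) phase, so the weight changes only to the
// extent that the two phases disagree.
func contrastiveUpdate(w, sendMinus, recvMinus, sendPlus, recvPlus, lrate float64) float64 {
	return w + lrate*(sendPlus*recvPlus-sendMinus*recvMinus)
}

func main() {
	w := 0.4
	// The network's own guess (minus phase) undershoots the target (plus phase),
	// so the weight is strengthened.
	w = contrastiveUpdate(w, 0.6, 0.2, 0.6, 0.9, 0.1)
	fmt.Printf("updated weight: %.3f\n", w)
}
```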
ch8/pvlv/README.md (2 additions, 2 deletions)
@@ -23,7 +23,7 @@ The overarching idea behind the PVLV model [OReilly et al, 2007](#references) is
![PV.1](fig_bvpvlv_pv_lv_only.png?raw=true "PV.1")

**Figure 1:** Simplified diagram of major components of the PVLV model, with the LV Learned Value component in the Amygdala and PV Primary Value component in the Ventral Striatum (principally the Nucleus Accumbens Core, NAc). LHb: Lateral Habenula, RMTg: RostroMedial Tegmentum, PPTg: PendunculoPontine Tegmentum, LHA: Lateral Hypothalamus, PBN: Parabrachial Nucleus.
- See [PVLV Code](https://github.com/emer/leabra/tree/master/pvlv) for a more detailed figure and description of the implementation.
+ See [PVLV Code](https://github.com/emer/leabra/tree/main/pvlv) for a more detailed figure and description of the implementation.

# Basic Appetitive Conditioning

@@ -192,7 +192,7 @@ Note how the negative `VTAp_act` (black) and positive `LHbRMTg_act` (blue) activ

**Tip:** You may want to switch back and forth with the `NetView` tab to watch the activity of the layers as stimuli are presented. If so, switch back to `TrialTypeData` to continue.

- At the end of conditioned inhibition training three test trials are run: A alone, X alone, and AX. (Reward is never presented in any case). Note that the network shows a dopamine dip to the conditioned inhibitor (X) meaning that it has acquired negative valence, in accordance with the [Tobler et al., 2003](#references) data. This is caused by activity in the `LHbRMTg`, which reflects activity of the `VSMatrixPosD2` that has learned an association of the X conditioned inhibitor with reward omission. See [PVLV Code](https://github.com/emer/leabra/tree/master/pvlv) if you wish to learn more about the computations of the various ventral striatum and amygdala layers in the network.
+ At the end of conditioned inhibition training three test trials are run: A alone, X alone, and AX. (Reward is never presented in any case). Note that the network shows a dopamine dip to the conditioned inhibitor (X) meaning that it has acquired negative valence, in accordance with the [Tobler et al., 2003](#references) data. This is caused by activity in the `LHbRMTg`, which reflects activity of the `VSMatrixPosD2` that has learned an association of the X conditioned inhibitor with reward omission. See [PVLV Code](https://github.com/emer/leabra/tree/main/pvlv) if you wish to learn more about the computations of the various ventral striatum and amygdala layers in the network.

> **Optional Question** Why does the network continue to show a partial dopamine burst to the A stimulus when it is presented alone? Hint: You may want to watch the network run again and note the different trial types. What is the purpose of interleaving A_Rf trials with the AX trials?
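The dopamine bursts and dips discussed in this README excerpt are driven by reward prediction errors. A minimal Rescorla-Wagner style sketch (hypothetical code; the PVLV model decomposes this into separate LV amygdala and PV ventral striatum pathways, which this ignores) shows how a burst shrinks with acquisition and a dip appears at reward omission:

```go
package main

import "fmt"

// rwStep updates the learned value v for a stimulus given the reward actually
// received, returning the new value and the prediction error delta, which
// plays the role of a dopamine burst (positive) or dip (negative).
func rwStep(v, reward, lrate float64) (newV, delta float64) {
	delta = reward - v
	newV = v + lrate*delta
	return newV, delta
}

func main() {
	v := 0.0
	// Acquisition: stimulus A is followed by reward, so delta shrinks over trials.
	for trial := 0; trial < 5; trial++ {
		var delta float64
		v, delta = rwStep(v, 1.0, 0.3)
		fmt.Printf("A_Rf trial %d: value=%.3f dopamine=%+.3f\n", trial, v, delta)
	}
	// Omission: withholding the expected reward produces a negative delta (a dip).
	_, delta := rwStep(v, 0.0, 0.3)
	fmt.Printf("omission: dopamine=%+.3f\n", delta)
}
```

The negative delta at omission is the same basic signal that gives the conditioned inhibitor X its negative valence in the simulation above, although there it is computed through the LHbRMTg pathway rather than a single scalar value.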
ch9/sir/README.md (2 additions, 2 deletions)
@@ -18,7 +18,7 @@ As discussed in the Executive Function Chapter, electrophysiological recordings

In summary, correct performance of the task in this model requires BG gating of *Store* information into the PFCmnt stripe, and then *not* gating any further *Ignore* information into that stripe, and finally appropriate gating in PFCout on the *Recall* trial. This sequence of gating actions must be learned strictly through trial-and-error exploration, shaped by a simple *Rescorla-Wagner* (RW) style dopamine-based reinforcement learning system located on the left-bottom area of the model (see the Motor Control and Reinforcement Learning chapter for details). The key point is that this system can learn the predicted reward value of cortical states and use errors in predictions to trigger dopamine bursts and dips that train striatal gating policies.

- To review the functions of the other layers in the PBWM framework (see the [pbwm](https://github.com/emer/leabra/blob/master/pbwm) repository for more complete info):
+ To review the functions of the other layers in the PBWM framework (see the [pbwm](https://github.com/emer/leabra/blob/main/pbwm) repository for more complete info):

* **Matrix**: this is the dynamic gating system representing the matrix units within the dorsal striatum of the basal ganglia. The bottom layer contains the "Go" (direct pathway) units, while top layer contains "NoGo" (indirect pathway). As in the earlier BG model, the Go units, expressing more D1 receptors, increase their weights from dopamine bursts, and decrease weights from dopamine dips, and vice-versa for the NoGo units with more D2 receptors. As is more consistent with the BG biology than earlier versions of this model, most of the competition to select the final gating action happens in the GPe and GPi (with the hyperdirect pathway to the subthalamic nucleus also playing a critical role, but not included in this more abstracted model), with only a relatively weak level of competition within the Matrix layers. Note that we have combined the maintenance and output gating stripes all in the same Matrix layer -- this allows these stripes to all compete with each other here, and more importantly in the subsequent GPi and GPe stripes -- this competitive interaction is critical for allowing the system to learn to properly coordinate maintenance when it is appropriate to update/store new information for maintenance vs. when it is important to select from currently stored representations via output gating.
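The Matrix description above says that Go (D1) units strengthen from dopamine bursts and weaken from dips, with the opposite sign for NoGo (D2) units. A toy opponent update along those lines (illustrative only, not the PBWM implementation) could be written as:

```go
package main

import "fmt"

// matrixUpdate applies opponent dopamine-modulated learning to the Go (D1) and
// NoGo (D2) weights for a stripe whose input and gating activity are given.
// A positive da (burst) strengthens Go and weakens NoGo; a negative da (dip)
// does the opposite.
func matrixUpdate(goW, nogoW, input, gated, da, lrate float64) (float64, float64) {
	goW += lrate * da * input * gated
	nogoW -= lrate * da * input * gated
	return goW, nogoW
}

func main() {
	goW, nogoW := 0.5, 0.5
	// A Store trial was gated and later rewarded: the dopamine burst trains Go.
	goW, nogoW = matrixUpdate(goW, nogoW, 1.0, 1.0, +0.8, 0.1)
	fmt.Printf("after burst: Go=%.2f NoGo=%.2f\n", goW, nogoW)
	// An Ignore trial was gated and reward was later lost: the dip trains NoGo.
	goW, nogoW = matrixUpdate(goW, nogoW, 1.0, 1.0, -0.8, 0.1)
	fmt.Printf("after dip:   Go=%.2f NoGo=%.2f\n", goW, nogoW)
}
```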

@@ -72,7 +72,7 @@ Now we will explore how the Matrix gating is driven in terms of learned synaptic

> **Question 9.8:** Explain how these weights from S,I,R inputs to the Matrix stripes make sense in terms of how the network actually solved the task, including how the Store information was maintained, and when it was output, and why the Ignore trials did not disturb the stored information.
- If you want to experience the full power of the PBWM learning framework, you can check out the [sir2](https://github.com/emer/leabra/blob/master/examples/sir2) model, which takes the SIR task to the next level with two independent streams of maintained information. Here, the network has to store and maintain multiple items and selectively recall each of them depending on other cues, which is a more demanding task that networks without selective gating capabilities cannot achieve. That version more strongly stresses the selective maintenance gating aspect of the model (and indeed this problem motivated the need for a BG in the first place).
+ If you want to experience the full power of the PBWM learning framework, you can check out the [sir2](https://github.com/emer/leabra/blob/main/examples/sir2) model, which takes the SIR task to the next level with two independent streams of maintained information. Here, the network has to store and maintain multiple items and selectively recall each of them depending on other cues, which is a more demanding task that networks without selective gating capabilities cannot achieve. That version more strongly stresses the selective maintenance gating aspect of the model (and indeed this problem motivated the need for a BG in the first place).

# References

