A Subsymbolic Model of Schizophrenic Language

In this project, we are exploring possible causes of schizophrenia using DISCERN, a neural network-based model of human story processing. The system is first trained to paraphrase stories consisting of multiple scripts; then various "lesions" (simulated pathologies) are applied to the networks to model possible underlying causes of schizophrenia.

This demo makes it possible to observe the impaired storytelling both graphically and in individual stories. Please note, however, that this site is primarily intended as a research tool, so not all options are documented in detail. Please send any questions to uli (at) cs (dot) utexas (dot) edu.

To start exploring how different lesions impair story processing in DISCERN, do the following:

  1. Select the lesions you would like to compare, using the check boxes on the left.
  2. Select the plots you are interested in, using the check boxes on the right.
  3. Push the "Refresh plots" button below.
  4. Done!
The "[details]" links in the plot headings each lead to a separate page that shows a specific lesion in more detail, including the actual language output produced by DISCERN at various lesion intensities.

Select Lesions to Compare:

Hyperlearning-Story-Generator
Increased learning rates in the story generator module, simulating another possible effect of dopamine imbalance on memory consolidation.
Hyperlearning-Memory-Encoder
Increased learning rates in the memory encoder module, simulating aberrant memory consolidation possibly due to dopamine imbalance and resulting overactivation in the hippocampus.
Reduce_Network_Bias
Reduced bias in the story generator, the counterpart of Increased-Network-Bias.
Increased-Network-Bias
Increased bias in the story generator, simulating abnormal arousal states that could produce both the under- and over-activation at a neuronal level seen in schizophrenia.
Semantic-Blurring
Blurring of word representations in semantic memory, simulating excessive coactivation of related words (overpriming) associated with schizophrenia.
Semantic-Noise
Semantic memory distortion with noise, suggested by altered word associations and fluency associated with derailment-type language in schizophrenia.
Semantic-Overactivation
Excessive activation in semantic memory, suggested by functional neuroimaging studies of patients with schizophrenia.
Working-Memory-Disconnection
Pruning of network connections in the story generator, simulating loss of cortical connectivity.
Working-Memory-Noise
Working memory distortion with noise, simulating excessive cortical noise and reduced efficiency in frontal cortex associated with schizophrenia.
Working-Memory-Gain-Reduction
Reduced gain in the story generator's working memory units.
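As a rough illustration (a minimal sketch, not DISCERN's actual implementation), several of the lesions above can be thought of as simple transformations of a network's weights, biases, or word representations; the function names and parameters here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(weights, intensity):
    """Working-Memory-Noise / Semantic-Noise: perturb weights with Gaussian noise."""
    return weights + rng.normal(0.0, intensity, size=weights.shape)

def prune(weights, fraction):
    """Working-Memory-Disconnection: zero out the smallest-magnitude connections."""
    w = weights.copy()
    k = int(fraction * w.size)
    if k:
        idx = np.argsort(np.abs(w), axis=None)[:k]
        w.flat[idx] = 0.0
    return w

def shift_bias(bias, delta):
    """Increased-Network-Bias (or its reduction): shift unit biases by delta."""
    return bias + delta

def blur(word_vectors, strength):
    """Semantic-Blurring: pull each word representation toward the overall mean,
    increasing coactivation of related words."""
    mean = word_vectors.mean(axis=0, keepdims=True)
    return (1 - strength) * word_vectors + strength * mean
```

In each case a single scalar ("lesion intensity") controls how strongly the network is perturbed, which is what the plots below vary along their x-axis.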

Select Plots:

Output
Overview of the DISCERN output at different lesion strengths, broken down into filtered sentences, correctly reproduced sentences, and different kinds of errors.
Recall
Performance as a function of lesion strength.
Filter Performance
How good is the sentence filter at identifying errors? Positive (negative) means above (below) chance.
No. of Frame Shifts
Number of jumps between stories.
Frame Shift Context
Frame shifts that cross context (i.e. personal -> gangster story or vice versa) vs. those that do not.
Derailed Propositions
No. of propositions inserted from another story.
Agency Shifts
Substituting one word that denotes an agent (e.g. "Fred" or "lawyer") for another.
Lexical Errors
Word substitutions within a lexical category (excluding those where both words are agents).
Ungrammatical Sentences
Sentences with word substitutions across lexical categories.
AS vs Other Word Insertions
Compares agency shifts with other lexical errors (within the same lexical category), e.g. "car" -> "gun".
AS Consistency
How likely are agency shifts to be repeated? No. of unique agency shifts vs. repeated agency shifts.
AS Context
Number of agency shifts that cross context (personal -> gangster or vice versa) vs. number of agency shifts within context.
AS Entropy
A measure of the randomness of agency shifts. Lower entropy means agency shifts tend to be more consistent and follow a pattern.
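One way such a measure could be computed (an illustrative sketch, not necessarily DISCERN's exact definition) is the Shannon entropy of the observed distribution of shifts:

```python
import math
from collections import Counter

def shift_entropy(shifts):
    """Shannon entropy (in bits) of a list of agency shifts.

    `shifts` is a list of (original_word, substituted_word) pairs.
    Lower entropy means the same few substitutions recur (more patterned);
    higher entropy means the shifts are spread over many distinct pairs.
    """
    counts = Counter(shifts)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

For example, five repetitions of the same shift give entropy 0, while four distinct, equally frequent shifts give 2 bits.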
Self Insertions
What percentage of agency shifts replace another agent with the "I" character?

Options:

Stories: All / Gangster only
Recall range
Plot Size
Hide Key
Sentence Filter (0-100)
Hyperlearning-Memory-Encoder [details]
Hyperlearning-Story-Generator [details]
Increased-Network-Bias [details]
Reduce_Network_Bias [details]
Semantic-Blurring [details]
Semantic-Noise [details]
Semantic-Overactivation [details]
Working-Memory-Disconnection [details]
Working-Memory-Gain-Reduction [details]
Working-Memory-Noise [details]